From “Legacy” to Current
I currently work as part of the Digital Media & Marketing Team at BESTSELLER. The team’s main product is the DAM (Digital Asset Management) system that maintains and shares BESTSELLER’s digital assets across the organization. We also occasionally handle other products that fall into the realm of “Media & Marketing”, but that’s another discussion.
My biggest focus has been maintaining the previous DAM solution, a COTS (Commercial Off-the-Shelf Software) system. Quite early on, I identified the dire need for a swifter and more efficient Digital Asset Management system. It could take over a day for some newly uploaded assets to make it into the system, and on top of that, the version of the COTS system running at BESTSELLER was the last one available to run on-prem, meaning we would eventually hit issues with unsupported operating systems or other security problems. With the availability of high-resolution product images being a key factor in driving our online retail sales, I embarked on the creation of a replacement system called “StorM” (Store Media).
As an integral part of the development team, I played a significant role in conceptualizing a migration path from the previous system. This was done with meticulous care to ensure minimal pain points for our existing users and integrated systems. Using languages and frameworks such as C#, .NET 6 through 8, TypeScript, React, and Next.js, we (I couldn’t do this on my own!) managed to create a system with much better performance than its predecessor, as well as stronger system-to-system integration through tighter data control and improved data quality.
On both the old and new DAM, I also worked on fundamentals such as improving application stability by surfacing telemetry data so problems could actually be fixed. This was achieved by creating a logging/monitoring library shared across .NET projects, which is available for all Tech Teams in BESTSELLER to use. The library significantly eased the migration across logging platforms, from Graylog to Datadog and then to the ELK stack, enabling faster diagnosis and issue resolution. More recently, this has moved towards an OpenTelemetry-centric solution, further decoupling the StorM system from any specific telemetry/monitoring provider.
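A provider-agnostic setup like this boils down to wiring OpenTelemetry into the .NET host and exporting over OTLP, so the backend (ELK, Datadog, or anything else) can be swapped without code changes. The sketch below is illustrative only, not the actual BESTSELLER library; the service name and exporter choice are assumptions:

```csharp
// Program.cs — a minimal OpenTelemetry configuration sketch for a .NET 8 service.
// "storm-api" is a hypothetical service name used here for illustration.
using OpenTelemetry.Logs;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Route application logs through OpenTelemetry instead of a vendor SDK.
builder.Logging.AddOpenTelemetry(logging =>
{
    logging.SetResourceBuilder(
        ResourceBuilder.CreateDefault().AddService("storm-api"));
    // OTLP keeps the telemetry backend swappable.
    logging.AddOtlpExporter();
});

// Traces for incoming requests and outgoing HTTP calls.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("storm-api"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddOtlpExporter());

var app = builder.Build();
app.Run();
```

The key design point is that nothing above names a specific monitoring provider: pointing the OTLP exporter at a different collector endpoint is a configuration change, not a code change.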
Tooling and Way of Working
In terms of tooling, we work with a broad selection of technologies, spanning C#, .NET, TypeScript, and React, as well as cloud platforms including Azure and Google Cloud Platform (GCP). I transitioned the team’s projects seamlessly from Bitbucket to GitHub, implementing a Fork & Pull Request workflow to enhance knowledge sharing and code quality within the team. Before this, everything was handled by external developers with no controls over code style, ways of working, or even basic checks that committed code didn’t have bugs.
Our way of working was further exemplified by how we automated our build pipeline with CircleCI, transitioning from manual MSBuild invocations to a NUKE-based build script. We also migrated from an on-prem/in-house webhook implementation to Azure Event Grid, improving stability and scalability and granting better insights and metrics. With StorM, everything is deployed automatically, using Terraform for IaC, GitHub Actions workflows, and some Bash scripts.
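Such a Terraform-plus-GitHub-Actions deployment can be sketched as a short workflow that runs `terraform init` and `terraform apply` on every push to the main branch. This is a minimal illustration, not StorM’s actual pipeline; the workflow name, branch, and `infra` directory are hypothetical placeholders:

```yaml
# .github/workflows/deploy.yml — illustrative sketch; names are placeholders.
name: deploy
on:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform init & apply
        working-directory: infra
        run: |
          terraform init
          terraform apply -auto-approve
```

In practice a pipeline like this would also need cloud credentials (e.g. via repository secrets or OIDC) and typically a `terraform plan` gate before `apply`, both omitted here for brevity.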
When joining the team, I took pride in setting up and guiding the team into using Confluence for documentation, ensuring that documentation lived more or less in one place. Drawing on previous experience from Coolblue, I could suggest tried-and-tested ways of working.
MVP for API / EDI
When COVID hit, resources became scarcer. There was an idea for a generic product-sharing API/system from which all product data in BESTSELLER could be gathered. Somehow, I became the developer for this, building an MVP for a BESTSELLER-wide data-sharing platform accessible via API, EDI, or other transfer methods. To this day, the project is still in use, though now owned by another team, as I myself suggested. Such a project really needs the backbone of a good team to ensure that knowledge is shared, and that critical concerns such as monitoring and scaling are agreed within the team.
Ongoing
I’m still at BESTSELLER, and I’m still continuously pushing myself to ensure that StorM is as good a system as we can manage with the resources we have. I’m now also an Architect, which means I spend a bit more time considering BESTSELLER Tech-wide “rules” and best practices as part of an Architect Community. The purpose is to skill up Tech across the company and improve alignment across teams, ensuring that data can be shared in an easier but more robust way.