In my presentation, I’ll talk about 3 lessons (or maybe more) that I’ve learned while migrating from Sitecore 9 on Azure PaaS to Sitecore 10 on containers. Each phase (my first experience, optimizing the Docker strategy, and making the most out of the platform) will be part of this presentation. I’ll also give you the answer to the popular question “does Sitecore require me to migrate to containers?”
This will NOT be a “getting started” session, but it isn’t an “advanced concepts” session either. Just a fun session from a guy who enjoyed migrating from one technology to another.
A few days ago, Microsoft explained on their devblog how to scan NuGet packages for security vulnerabilities. This feature was recently released, but it had been on the GitHub issue list for quite some time. Microsoft uses the GitHub Advisory Database to identify vulnerabilities in NuGet packages; click here for more information. Microsoft added the vulnerability check to their dotnet tooling. Just run dotnet list package --vulnerable (make sure to update Visual Studio or the .NET 5.0 SDK!) and a nice overview of vulnerable packages is shown. However, this only works with the PackageReference format. In our situation, we are still using the old packages.config format in hundreds of projects, as we cannot migrate to the PackageReference format yet. This old format can’t benefit from this lovely gem. That’s why I decided to create a little script to get an overview of (possible) vulnerabilities in our code bases. The script can be found here.
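The idea behind such a script boils down to collecting every id/version pair from all packages.config files and checking each pair against the advisory data. A minimal C# sketch of that first half (the root folder is a placeholder assumption, and the advisory lookup itself is left as a comment; the actual script linked above does the real work):

```csharp
using System;
using System.IO;
using System.Xml.Linq;

// Hypothetical source root; point this at your own solution folder.
foreach (var file in Directory.EnumerateFiles(
    @"C:\src", "packages.config", SearchOption.AllDirectories))
{
    var doc = XDocument.Load(file);
    foreach (var package in doc.Descendants("package"))
    {
        var id = (string)package.Attribute("id");
        var version = (string)package.Attribute("version");

        // Each id/version pair can now be checked against the
        // GitHub Advisory Database, e.g. via its GraphQL
        // securityVulnerabilities query.
        Console.WriteLine($"{id} {version}");
    }
}
```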
Response status code does not indicate success: 401 (Unauthorized).
Although I was pretty sure that the FEED_ACCESSTOKEN, which is required for correct authentication, was correctly set in my environment file, the Docker build still fell back to an old value.
Emptying the cache, deleting images: nothing helped. It turned out that I had also set an environment variable for this same FEED_ACCESSTOKEN at system level. Apparently, the global environment variable takes precedence over the locally set one.
Two solutions are possible here:
run $env:FEED_ACCESSTOKEN = "" before you run your actual build
simply delete the FEED_ACCESSTOKEN from your environment variables.
Thanks for reading another episode of “Once bitten, twice shy.”
When hosting high-traffic websites, it’s important to keep them up and running at all times. The moment one of them goes down, it might lead to conversion loss or a decrease in NPS. Detection of unplanned downtime is very important in these cases. Sometimes there isn’t even downtime, but *something* in the infrastructure prevents the website from loading (I’ll explain a few cases after the break). This blogpost will teach you how to use your visitors as a continuous monitoring beacon. Code can be found here. Also a small shoutout to my colleague Marten Bonnema, who created an AI-plugin which *does* work with service workers.
In our company, we use Unicorn for content serialization, in order to be able to deploy “applicative” content like templates across our environments. For dev and test, we also provide content that we use for regression testing in these environments; we don’t (want to) sync our production content to these environments. We also had the wish to spin up environments on request, with all of this content available in an instant, for example to validate pull requests. With 20,000 .yml files, the synchronization process takes at least 45 minutes: that takes way too long for a fast regression test and doesn’t fit a fast “shift left” strategy. With the introduction of containers, things have changed, as fully pre-provisioned environments can be spun up in literally minutes.
Note 1: My current opinion is that this is not a feasible way to deploy content into production! Note 2: I recently found out that this is the same approach the demo team uses to provide their Lightroom demo.
After following the “getting started” guide by Nick Wesselman, I had my first Sitecore 10 environment up and running, so there is no need to write about the convenient installation. But being new to Docker and (thus) new to the approach that Sitecore uses for these development environments, I struggled a little bit to understand how everything worked together. I wanted to know about the structure and the dependencies. As I couldn’t find any blogpost on the new structure/setup, how all the roles correlate to each other, and how the dependencies work, I decided to dive into it and share my findings. Note: there is a lot of information on the Sitecore DevEx Containers documentation site explaining how things can/should be achieved; I can really recommend this site.
On our road towards realtime personalization, we needed to reload our xDB contact on every request, as external systems might have updated several facets with information that could or should be used within Sitecore. Out of the box, this does not happen.
Why changes to the xDB contact are not reflected on the contact in Sitecore
The problem within Sitecore lies in how and when xDB contacts are retrieved from the xDB. Let’s take a look at the diagram below:
In the sequence diagram, it becomes clear that after the start of the session, a contact is retrieved from the xDB. This is the only time in the default lifecycle of a session that this happens, which means that whenever an update to the contact is written to the xDB, this change is not reflected in the state within the private session database. In order to reflect those changes, the contact needs to be reloaded. This can be done using the following code:
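A sketch of that reload, using Sitecore’s tracking API (the ContactManager is resolved through the Sitecore factory; treat this as an outline under those assumptions rather than drop-in code):

```csharp
using Sitecore.Analytics;
using Sitecore.Analytics.Tracking;

// Resolve the contact manager from Sitecore's configuration.
var manager = Sitecore.Configuration.Factory.CreateObject(
    "tracking/contactManager", true) as ContactManager;

var contact = Tracker.Current.Contact;

// 1. Ensure the contact exists in xDB before it can be reloaded.
if (contact.IsNew)
{
    manager.SaveContactToCollectionDb(contact);
}

// 2. Remove the contact from the current session.
manager.RemoveFromSession(contact.ContactId);

// 3. Explicitly reload the contact, pulling the latest facets from xDB.
Tracker.Current.Session.Contact = manager.LoadContact(contact.ContactId);
```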
The code consists of three parts:
Ensure that the contact exists.
When the “IsNew” property has been set to true, the contact only exists in the Sitecore environment. An explicit save is needed before the contact can be reloaded. This is only the case when the visitor doesn’t send an SC_ANALYTICS_GLOBAL_COOKIE – this is a persistent cookie which is stored across sessions and contains an identifier which can be used to identify a user in the xDB. When this information is not available, the contact will be marked as “IsNew”. Whenever a user leaves information which can be used to identify this user, a merge of contacts can be executed.
Remove the contact from the current session
By removing a contact entirely from the current session, its interactions and goals will be saved, but the contact details and its facets will be reloaded upon the next request.
Explicitly reload the contact
When the contact is removed from the session, it can be reloaded explicitly. By removing the contact from the session at the start of the request and reloading that same contact immediately, all the latest, fresh information for this contact, including its facets, is made available to Sitecore.
By default, Sitecore loads a contact into the session, but does not immediately sync updates to the xDB back to Sitecore. By explicitly removing and reloading the contact at the start of a request, all the latest changes to a contact can be made available to Sitecore. This data can be used for, for example, smart personalizations.
This blogpost describes how to add the Azure Artifacts NuGet credential provider to a Windows-based Docker container for building .NET (full framework) solutions using authenticated Azure DevOps Artifacts feeds. As I couldn’t find a feasible solution, I decided to write a quick guide on how to set this up. This blogpost makes use of the Dockerfile structure that Sitecore provides, but the learnings can be applied to any solution. In other words: this post is not tied to the Sitecore ecosystem. To skip immediately to the instructions, click this link.
Note: It has been a while since I was really, really, really enthusiastic about a new release of Sitecore, but this Sitecore 10 release is just: WOW. Sitecore has finally put an enormous effort into making new(ish) technology, such as containers, .NET Core, real CI/CD and command line automation, available to its developers. With that, together with the new, supported serialization solution, Sitecore has made a giant leap towards a complete, modern developer experience. This blogpost describes how a private Azure DevOps Artifacts NuGet feed can be used in conjunction with the Sitecore Docker setup.
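The core of the setup comes down to installing the credential provider in the build stage and handing it the PAT at restore time. A minimal Dockerfile sketch under a few assumptions: the base image, organization and feed names are placeholders, the FEED_ACCESSTOKEN build argument matches the one mentioned earlier on this blog, and nuget.exe is assumed to be on the path in the SDK image:

```dockerfile
# escape=`
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS builder
SHELL ["powershell", "-Command"]

ARG FEED_ACCESSTOKEN

# Install the Azure Artifacts credential provider,
# including the .NET Framework flavor (-AddNetfx).
RUN Invoke-WebRequest https://aka.ms/install-artifacts-credprovider.ps1 -OutFile install.ps1; `
    .\install.ps1 -AddNetfx

WORKDIR C:\build
COPY . .

# Compose the endpoint/PAT json the provider expects and restore in the
# same layer, so the token never persists as an image environment variable.
RUN $env:VSS_NUGET_EXTERNAL_FEED_ENDPOINTS = '{"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/<organization>/_packaging/<feed>/nuget/v3/index.json", "password":"' + $env:FEED_ACCESSTOKEN + '"}]}'; `
    nuget restore MySolution.sln
```

Passing the PAT as a build argument keeps it out of the final image layers, which matters when the image is pushed to a shared registry.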
In my Sitecore Symposium session “Sitecore in the enterprise: Optimize your time to deliver toward the speed of light, using all the new love in Azure DevOps” (and yes, that is a mouthful) I spent quite some time on the “mono-package” approach. This blogpost explains what a mono-package is and why it is (much) faster in terms of deployments than using multiple web deployment packages.
Disclaimer 1: In this blogpost I talk (a bit) about Sitecore, but it is applicable to any application that is deployed using msdeploy or the App Service task in Azure DevOps. The blogpost “Sitecore challenges on the mono-package approach” contains the specific challenges that had to be solved.
Disclaimer 2: Some people might immediately be opinionated: “how could you ever end up with multiple packages, you have a bad architecture”. I understand your pain, and in some cases you might be right. But there are cases where you have to deal with external factors, such as existing platforms and organizational challenges, where a layered approach is not too weird a solution.