📄️ Setting up the Data Layer Monitor
Once you have secured your login to the Data Layer Monitor UI, the first step is to start tracking datalayer events so the validation process can begin. We do this by deploying a 'shadow pixel': a tag in your tag management system that captures all pushed datalayer payloads along with the relevant meta information.
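As an illustration of what a shadow pixel does conceptually, the sketch below wraps `dataLayer.push` so every payload is captured with basic metadata before being forwarded. This is not the actual shadow pixel code (which your tag management system deploys for you); the `captured` buffer and the metadata fields are assumptions for the example.

```javascript
// Illustrative sketch only: intercept dataLayer.push and record each payload
// with metadata, then forward it to the original push so tracking still works.
const captured = [];

function installShadowPixel(dataLayer) {
  const originalPush = dataLayer.push.bind(dataLayer);
  dataLayer.push = function (...payloads) {
    for (const payload of payloads) {
      captured.push({
        payload,
        meta: {
          timestamp: Date.now(),
          // location is only available in a browser context
          page: typeof location !== 'undefined' ? location.href : null,
        },
      });
    }
    return originalPush(...payloads); // forward untouched
  };
}

// Usage: simulate a page's dataLayer
const dataLayer = [];
installShadowPixel(dataLayer);
dataLayer.push({ event: 'purchase', value: 49.95 });
```

The key design point is that the wrapper forwards every payload unchanged, so observing the datalayer never alters what your other tags receive.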
📄️ A guide to your first datalayer configuration
Configuration refers to the process of defining the desired state in which your datalayer and the pushed events operate. Without a defined desired state, there would be nothing to validate the incoming events against. In practice, you will define each event that you expect to come in from your implementation, along with all the parameters that are sent with it. Furthermore, you will state the expected content of these incoming parameters wherever relevant.
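To make the idea of a desired state concrete, here is a hypothetical definition of an expected event with its parameters and allowed content, plus a minimal validation sketch. The configuration format and the `validate` helper are assumptions for illustration; the actual DLM configuration is done through the UI.

```javascript
// Hypothetical desired-state definition for one expected event.
const purchaseConfig = {
  event: 'purchase',
  parameters: {
    transaction_id: { required: true, type: 'string' },
    value: { required: true, type: 'number' },
    currency: { required: true, type: 'string', allowed: ['EUR', 'USD'] },
  },
};

// Minimal sketch: check an incoming payload against the desired state.
function validate(payload, config) {
  const errors = [];
  for (const [name, rule] of Object.entries(config.parameters)) {
    const value = payload[name];
    if (value === undefined) {
      if (rule.required) errors.push(`missing parameter: ${name}`);
      continue;
    }
    if (typeof value !== rule.type) errors.push(`wrong type for ${name}`);
    if (rule.allowed && !rule.allowed.includes(value)) {
      errors.push(`unexpected content for ${name}`);
    }
  }
  return errors;
}

const errors = validate(
  { event: 'purchase', value: 49.95, currency: 'GBP' },
  purchaseConfig
);
// errors: ['missing parameter: transaction_id', 'unexpected content for currency']
```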
📄️ Setting up user alert channels (in Settings)
Most of the settings are explained at a high level on the setting.md page, so this page will focus primarily on making sure you are able to set up alerting channels properly.
📄️ Interpreting and acting upon alerts
Once you've got everything set up, your next step will be to interpret and act upon any alerts that might come in. The section below can help you navigate the incoming alerts.
📄️ Live datalayer checker (Beta)
As part of fully integrating with the process of data collection, we have created a live checker functionality as part of the shadow pixel code setup. This is currently only available for GTM; please get in contact with us if you would like it to work in another environment as well.
📄️ Using documentation for new features
Since a big part of the effort around the DLM is to define the accepted way of collecting data, we've made an effort to use it as a central source of documentation: it records what is currently in place and gives developers instructions on how to properly track events.
📄️ Using anomaly detection on event counts (Beta)
Based on daily event counts we run an anomaly detection model that spots outliers in the number of events measured per event type. If the measured count is higher than the 'upper bound' or lower than the 'lower bound' determined from the trained model, an anomaly is flagged and shown in the graph on the UI homepage.
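The bound check described above can be sketched as follows. The actual model DLM trains is not described here, so the mean-plus-k-standard-deviations bounds in this example are an assumption chosen purely to illustrate how a daily count is compared against an upper and lower bound.

```javascript
// Illustrative sketch: derive upper/lower bounds from historical daily
// counts (assumed mean ± k·stddev; the real DLM model may differ).
function anomalyBounds(dailyCounts, k = 3) {
  const mean = dailyCounts.reduce((a, b) => a + b, 0) / dailyCounts.length;
  const variance =
    dailyCounts.reduce((a, c) => a + (c - mean) ** 2, 0) / dailyCounts.length;
  const std = Math.sqrt(variance);
  return { lower: mean - k * std, upper: mean + k * std };
}

// A count outside the bounds is flagged as an anomaly.
function isAnomaly(count, bounds) {
  return count < bounds.lower || count > bounds.upper;
}

// Usage: seven days of counts for one event type
const history = [100, 102, 98, 101, 99, 103, 97];
const bounds = anomalyBounds(history); // lower: 94, upper: 106
isAnomaly(500, bounds); // true  — spike well above the upper bound
isAnomaly(100, bounds); // false — within the expected range
```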