This year’s global situation continues to move towards a “storm” of geopolitical, social, and economic concerns, driving a rise in conflict and isolationist tendencies as well as a shift towards de-globalization.
Eventually, this shift will significantly impact information technology, requiring data and analytics professionals to adjust to greater fragmentation, disrupted supply chains, nonstop innovation, and hampered access to skilled labor. In a world where crisis is constant, the ability to react in the moment and anticipate what is coming next is becoming a core competency.
The Top 10 Trends 2023
Acting in advance is the key
The demands are increasing, and in 2023 organizations will leverage the capabilities of real-time infrastructures to benefit from the rising opportunities. Users need access to real-time data from the moment it is created in order to evaluate their company’s current position.
Faster decisions with AI & automation
Decision automation should be combined with real-time data already in place. According to Gartner, 95% of data-based decisions can be at least partially automated.
AI & automation can significantly decrease the time it takes for people to find data and can assist them in making faster decisions on more complex tasks within a shortened time.
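The idea of partially automated decisions can be sketched with a simple rule layer: clear-cut cases are decided automatically, while ambiguous ones are escalated to a human. The thresholds and event fields below are illustrative assumptions, not any vendor’s API.

```python
# Minimal sketch of partial decision automation on streaming events:
# clear-cut cases are decided by rules, ambiguous ones go to a human.
# The thresholds and the order shape are illustrative assumptions.
def route_order(order: dict) -> str:
    """Auto-reject flagged orders, auto-approve small ones, escalate the rest."""
    if order.get("fraud_flag"):
        return "rejected"
    if order["amount"] <= 100:
        return "approved"
    return "needs_human_review"

events = [
    {"amount": 40, "fraud_flag": False},
    {"amount": 900, "fraud_flag": False},
    {"amount": 75, "fraud_flag": True},
]
print([route_order(e) for e in events])
# ['approved', 'needs_human_review', 'rejected']
```

Even this toy example shows the “at least partially automated” pattern: only the middle case consumes human attention.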
Natural language processing (NLP)
Several natural language models have been created and trained on massive troves of data using deep neural-network machine learning. In BI tools, natural language interfaces let users request data insights simply by asking questions in everyday language.
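To make the idea concrete, here is a toy sketch of a natural-language layer over tabular data. Real BI tools use trained language models; this stand-in uses a one-pattern grammar, and the question shape and sample records are illustrative assumptions.

```python
# Toy sketch of answering a plain-language question over tabular data.
# A real NLP layer uses a trained model; this one-pattern grammar only
# stands in for the idea. Records and fields are illustrative assumptions.
import re
from statistics import mean

SALES = [
    {"region": "north", "sales": 120},
    {"region": "north", "sales": 80},
    {"region": "south", "sales": 200},
]

def answer(question: str, rows: list) -> dict:
    """Handle questions shaped like 'average sales by region'."""
    m = re.match(r"average (\w+) by (\w+)", question.lower())
    if not m:
        raise ValueError("question not understood by this toy grammar")
    metric, dimension = m.groups()
    groups = {}
    for row in rows:
        groups.setdefault(row[dimension], []).append(row[metric])
    return {key: mean(values) for key, values in groups.items()}

print(answer("Average sales by region", SALES))
# {'north': 100, 'south': 200}
```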
Augmented analytics to create data stories
“Provide the right information to the right user at the right time.” This maxim has been repeated for decades, but today it is more critical than ever: in a fragmented world where data is distributed and time is scarce, it is also tougher to achieve.
Businesses are cultivating an analytics culture, and data storytelling is becoming essential. Data analysts are encouraged to build a step-by-step narrative around every result. This practice, called “storytelling”, adds the narrative required to put insights into action.
Data Quality Management
Undoubtedly, big data has a tremendous impact on the operations of any business.
However, data should be accurate, up-to-date, consistent, and complete; otherwise, it can destroy business value and deplete profitability. For example, IBM estimates that businesses in the US alone lose $3.1 trillion annually because of poor data quality.
Data quality management (DQM) is continually shifting to the centre stage of BI. DQM provides insights into the data flowing through a business. It improves the data governance framework while ensuring that the data used for analysis can provide better insights into the business’s status and operations.
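The quality dimensions named above (completeness, consistency, freshness) can be checked automatically. The sketch below is a minimal illustration; the record layout, thresholds, and metric names are assumptions for the example, not a standard DQM schema.

```python
# Minimal sketch of automated data-quality checks along the dimensions
# discussed above. The record layout and the one-year freshness window
# are illustrative assumptions.
from datetime import date

RECORDS = [
    {"id": 1, "email": "a@example.com", "updated": date(2023, 1, 10)},
    {"id": 2, "email": None,            "updated": date(2023, 1, 12)},
    {"id": 2, "email": "c@example.com", "updated": date(2021, 6, 1)},
]

def quality_report(records, today=date(2023, 1, 15), max_age_days=365):
    ids = [r["id"] for r in records]
    return {
        # completeness: share of rows with no missing fields
        "complete": sum(all(v is not None for v in r.values())
                        for r in records) / len(records),
        # consistency: the primary key should be unique
        "unique_ids": len(set(ids)) == len(ids),
        # freshness: share of rows updated within the allowed window
        "fresh": sum((today - r["updated"]).days <= max_age_days
                     for r in records) / len(records),
    }

print(quality_report(RECORDS))
```

A report like this can gate a pipeline: if completeness or freshness drops below a threshold, the load is held back before the bad data depletes downstream analyses.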
More AI to support humans on manual preparation tasks
AI is providing more insights than ever, including some that were previously impossible for us to obtain. What about moving those components deeper into the data pipeline, before an application or dashboard has even been built? Using AI in data management will automate more rote tasks in data engineering. For example, according to Qlik, automating algorithms and reporting can help you in your “insight” journey.
Nevertheless, this pipeline doesn’t mean less human involvement. Instead, AI will automate some manual tasks, helping data engineers and scientists focus on the actual, impactful work.
System consolidation for maximum outcome
Consolidating the functions of siloed systems, including data integration, management, analytics/AI, visualization, data science, and automation, makes it easier for data producers and consumers to collaborate.
Common standards and APIs offer compatibility, and convergence is even easier when a vendor operates across more segments. In short, choose platforms that can work with multiple stacks and consolidate the data across them.
History is repeating itself
Despite the quick modernization that the pandemic brought, from on-premises to the cloud, the same old problems come up: data movement, transformation, metadata catalogues, and so on. For this reason, many “Wild West” startups fueled by venture capital have been created, each going after one specialization. However, most of them will soon disappear as the industry matures and consolidates.
The rise of derivative and synthetic data
Data that has been transformed, processed, aggregated, correlated, or otherwise operated on is called “derivative” data. Such data is beneficial for test data management: creating, managing, and delivering test data to application teams.
Now, with new privacy laws and integrity issues, it’s becoming essential to obfuscate data even further. In contrast, synthetic data is data that has not been generated from real operations; it can help in situations where real user data is scarce, as is often the case for small businesses, or where you want to train an AI model on vast data sets.
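A small sketch shows the synthetic-data idea: generate records that mimic the shape of real customer data without containing any real values, so they can safely feed test environments or model training. The field names and value ranges here are illustrative assumptions.

```python
# Minimal sketch of generating synthetic test records that mimic the
# shape of real customer data without containing real values. Field
# names and value pools are illustrative assumptions.
import random

def synth_customers(n: int, seed: int = 42) -> list:
    rng = random.Random(seed)  # seeded so test data is reproducible
    regions = ["north", "south", "east", "west"]
    return [
        {
            "id": i,
            "region": rng.choice(regions),
            "monthly_spend": round(rng.uniform(10.0, 500.0), 2),
        }
        for i in range(n)
    ]

sample = synth_customers(1000)
print(sample[0])
```

Because generation is seeded, every test run sees the same data set, which keeps test failures reproducible; scaling `n` up also yields the vast training sets mentioned above without touching real user data.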
The choice is not low-code or high-code, but optimization
Low-code tools deserve more of the spotlight: they not only let non-technical people create apps but also increase the consumption of data and insights. At the same time, high-code tools serve programmers and app developers who want templates along with maximum flexibility.
However, the choice isn’t between low-code and high-code. Instead, it has to be code optimization, focusing on the highest productivity and best business outcomes given the available skill sets.
Alexandra Athanasakou, BI & Data Visualization, Education Manager