Navigating legislative changes in Dutch Pension Administration

The pension landscape in the Netherlands is in flux. The pension agreement concluded at the end of 2020 between the cabinet and employee and employer organizations is a major step in the modernization of the pension system. The new pension rules are likely to take effect on 1 January 2023, after which pension providers will have more than four years to comply with the new legislation.

The pension agreement presents many challenges for pension administrators. 

  • They must gain a better understanding of the specific situation of individual participants, so that they can operate in a more customer-oriented and personal way. 
  • Data must also be made more readily available, so that participants can make the right choices based on it.
  • Pension providers will also have to collect and manage more and more personal and sensitive information. This puts them under increased scrutiny from regulators such as the Dutch Authority for the Financial Markets (AFM) and the Dutch Data Protection Authority (AP).
  • And then there is growing competition from IT companies entering the pension market, which increases the pressure on traditional organizations to reduce costs.

Improved disclosure

In short, the new pension scheme is forcing administrators to make their information provision much more efficient and personalized. But how do you do that? 

Using data more widely, handling sensitive data correctly, and reducing costs is a demanding set of requirements, especially given the current situation of many pension providers. There is often a large and diverse amount of data within an organization: (personal) data of participants, asset management data, and all kinds of data from policy advice and business-critical processes. This data is spread throughout the organization, the same data is often available in multiple locations, and it is stored in a wide variety of technologies. That makes it difficult to make data available quickly, in a controlled manner, and in the larger volumes needed for more complex analyses.

Many pension administrators are finding that they have reached the limits of their current data architecture. In traditional data architectures, data is typically copied, transformed, and integrated into a single physical database before it is offered for analysis, reporting, and visualization. That is convenient, but with today’s demand for data this approach increasingly falls short on delivery speed and the data volumes it can handle. As a result, end users take matters into their own hands. This creates many ‘elephant paths’ (unofficial workarounds), and much of the data usage bypasses the data platform, making it increasingly difficult to stay in control of the data and to ensure that sensitive data is only used by authorized personnel.

In addition, the risk of decisions being made on the basis of the wrong data increases.

No more unnecessary data replication

The solution lies in a flexible data architecture. Or rather, a modern data architecture. In essence, of course, it is still about making data available in the right place, at the right time, and in the right form. But with a modern data architecture, you can manage large data volumes, spread across an increasingly complex IT landscape, much more easily. 

In this way, you can more quickly meet the demand from participants, partners, legislators and regulators, stakeholders, and your own organization to make data available in various forms and for various purposes. In addition, a modern data architecture allows you to always meet current requirements in the areas of data governance, security, and privacy. 

Questions about the origin and use of data can be answered quickly and reliably, and you can anonymize data on-the-fly. But how does the modern data architecture differ from the traditional data warehouse?

The answer is twofold. First, such an architecture is “designed for the cloud” and thereby makes optimal use of all the functional and technical possibilities of the cloud. 

Second, it uses advanced data virtualization: an access layer to all relevant data inside and outside the organization, without the need to copy that data first. This supports the data minimization principle and drastically reduces the number of unnecessary copy runs. Data remains available on demand, while you keep the option to replicate it for performance, cleansing, or historization purposes. In this way, you can manage the data more easily and have a better understanding of where it comes from and who owns it.

You also run less risk of data misuse and data leaks. Moreover, development speed increases, storage costs go down, and there is less sensitive data to manage.

The combination of a cloud data platform and a data virtualization platform also makes it possible to connect new data sources and set up data services much faster, including for real-time data. Through data virtualization, you can offer data in all kinds of formats without that data having to be duplicated all the time. The underlying cloud data platform ensures the desired performance, and resources can be scaled up or down at any time. In this way, you can gain more insight into the current situation of individual participants by bringing together data from your business software (e.g. CRM and ERP), NoSQL databases, and cloud storage in a single virtual environment.
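To make the idea of a single virtual environment more concrete, the sketch below is a minimal, simplified illustration in Python (not Data Virtuality’s actual API; the source names, fields, and data are invented). It joins a relational “CRM” table and a document-style accrual collection into one participant view at query time, without copying either source into a central store.

```python
import sqlite3

# Hypothetical stand-ins for two real sources: a relational CRM system and a
# document-style (NoSQL) store with pension accrual data. All names and values
# are invented for illustration.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE participants (id INTEGER, name TEXT, birth_year INTEGER)")
crm.executemany("INSERT INTO participants VALUES (?, ?, ?)",
                [(1, "A. Jansen", 1975), (2, "B. de Vries", 1988)])

accruals_nosql = [  # documents as they might arrive from a NoSQL source
    {"participant_id": 1, "scheme": "DC", "accrued": 84_500},
    {"participant_id": 2, "scheme": "DC", "accrued": 23_100},
]

def participant_view():
    """Virtual view: joins both sources on demand; nothing is copied or stored."""
    accrual_by_id = {doc["participant_id"]: doc for doc in accruals_nosql}
    for pid, name, birth_year in crm.execute(
            "SELECT id, name, birth_year FROM participants"):
        doc = accrual_by_id.get(pid, {})
        yield {"id": pid, "name": name, "birth_year": birth_year,
               "scheme": doc.get("scheme"), "accrued": doc.get("accrued")}

for row in participant_view():
    print(row)
```

The point of the sketch is that consumers query one view while the data stays in the underlying systems; a real data virtualization platform adds query push-down, caching, and many more source types on top of this basic idea.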

Data virtualization also offers benefits for internal information and delivering data to external stakeholders. Previously, it was sufficient to develop reports and dashboards for internal users and reports for external parties such as suppliers and official agencies.

Nowadays, you want to be able to use data much more broadly. Think of forms such as self-service BI, where business users themselves have the ability to develop reports, or data scientists being able to look for as-yet unknown patterns and trends in data that can be used to improve business operations. Or supplier/customer driven BI, where external parties themselves analyze certain parts of the organization’s data. A data virtualization platform allows you to access the same data through different interfaces.

More trust in data and more control

To ensure that your users can also find the right data, a data virtualization platform has a so-called data catalog. This shows exactly what data exists, what it means and where it comes from. This not only makes it easier to find data for users, but also creates more trust in the data because the origin is clear. 
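To illustrate what such a catalog entry can record, here is a small, hypothetical sketch in Python; the field names are our own and do not reflect any specific product’s catalog schema.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One entry in a simplified data catalog: what the data means and where it comes from."""
    name: str
    description: str
    owner: str
    sources: list[str] = field(default_factory=list)  # lineage: underlying systems
    contains_personal_data: bool = False

catalog = [
    CatalogEntry("participant_view", "Current situation of individual participants",
                 owner="Pension Administration", sources=["CRM", "accrual_store"],
                 contains_personal_data=True),
    CatalogEntry("asset_returns", "Monthly asset management returns",
                 owner="Asset Management", sources=["asset_mgmt_dwh"]),
]

# A user browsing the catalog sees the meaning and origin before querying the data.
for entry in catalog:
    print(f"{entry.name} <- {', '.join(entry.sources)} (personal data: {entry.contains_personal_data})")
```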

At the same time, you want to prevent incorrect handling of all that data across your organization. A data virtualization platform makes this straightforward, because all access runs through one layer, giving you a single central location to record all usage.

Through this smart form of audit logging, you know exactly who uses which data. At a glance, you can check whether any unauthorized use of data is taking place. Of course, certain data is by definition shielded or only available to certain groups. A data virtualization platform offers a wide range of authorization options and applies them on-the-fly. This way, everyone can look at the same dataset, but with the authorizations that match their role.
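The following Python sketch illustrates the combination of on-the-fly masking and audit logging in a highly simplified form; the roles, field lists, and log format are assumptions for the example, not a prescribed model.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

# Which fields each role may see in clear text; roles and fields are illustrative.
VISIBLE_FIELDS = {
    "case_handler":   {"id", "name", "birth_year", "scheme", "accrued"},
    "data_scientist": {"id", "birth_year", "scheme", "accrued"},  # no names
}

def query_participants(user, role, rows):
    """Record every access in the audit log and apply field masking on-the-fly."""
    audit_log.info("%s | user=%s role=%s dataset=participant_view",
                   datetime.now(timezone.utc).isoformat(), user, role)
    allowed = VISIBLE_FIELDS.get(role, set())
    for row in rows:
        yield {key: (value if key in allowed else "***") for key, value in row.items()}

rows = [{"id": 1, "name": "A. Jansen", "birth_year": 1975, "scheme": "DC", "accrued": 84_500}]
print(list(query_participants("m.bakker", "data_scientist", rows)))
```

Both users query the same dataset; only the masking applied on the way out differs per role, and every request leaves an audit trail.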

Hybrid approach

So is a data architecture based on a data warehouse and ETL always the wrong choice, and is one that relies purely on data virtualization always the right answer?

Depending on the use case, one approach may be more desirable than the other, but experience has shown that in most cases a combination is needed. Therefore, a hybrid approach – using data virtualization and physical storage in a central cloud data platform – works best. Where virtualization alone really does not suffice, for example because of historization, data is physically stored on the cloud data platform.

In other cases, you use data virtualization to access and integrate data sources virtually. A solution such as Data Virtuality provides the capabilities to shape that hybrid approach smartly. The product combines data virtualization with ETL/ELT and standard patterns to quickly copy and historize data, so that the hybrid approach is supported by a single toolset. This allows rapid switching between the two approaches, for example when data usage or data volumes change, as well as a flexible combination of both in a single query or data model. In addition, no exotic knowledge is required, as the product is 100% SQL-based. This keeps the architecture simple and cost-effective.
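As a rough illustration of the decision logic behind such a hybrid setup, the Python sketch below chooses between virtual, on-demand access and a physical, historized copy; the threshold, function names, and replication step are placeholders rather than Data Virtuality functionality.

```python
from typing import Callable, Iterable

def replicate(dataset: str, fetch_virtual: Callable[[], Iterable[dict]]):
    # Placeholder: in practice this would be an ELT job writing to the central
    # cloud data platform, keeping history per load.
    snapshot = list(fetch_virtual())
    print(f"replicated {len(snapshot)} rows of {dataset} for historization")
    return snapshot

def serve(dataset: str,
          fetch_virtual: Callable[[], Iterable[dict]],
          needs_history: bool = False,
          expected_rows: int = 0,
          replication_threshold: int = 1_000_000):
    """Hybrid delivery: virtual by default, a physical copy only when justified."""
    if needs_history or expected_rows > replication_threshold:
        return replicate(dataset, fetch_virtual)  # physical, historized copy
    return fetch_virtual()                        # virtual, on-demand access

# Example: this dataset needs history, so it is landed physically.
serve("participant_view",
      fetch_virtual=lambda: [{"id": 1, "accrued": 84_500}],
      needs_history=True)
```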

The future of pension administration

Advanced data virtualization makes it possible to develop a flexible and controlled data architecture that allows you to use and offer data more broadly, safeguard correct use of the data, and reduce costs. Data is offered in one central location, but without the need for that data to actually be physically present at that location. In this way, you can support the growing need within the organization to use data faster and in as many ways as possible, while taking care of important issues such as security, governance, and privacy. That is the future of the modern pension administrator. In fact, pension administrators such as PGGM have already started the move to a modern data architecture.

So the question is not whether you should take the step, but rather when to start!

How to get your company ready?

If you are interested in discovering how Data Virtuality can help your organization successfully tackle these new legislative challenges, get in touch with us for a demo session tailored to your specific needs.

We have worked with other pension fund administrators to help them leverage the full potential of their data, providing a single-source-of-truth platform, and we can do the same for you.

Our trusted local partner Axians has a deep understanding of the local market and a wealth of experience and knowledge in adapting our solutions to the specifics of each company.
