Feel free to use our live chat if you cannot find your question answered here.
The Data Virtuality Platform uniquely combines data virtualization and data replication, giving data teams the flexibility to always choose the right method for each specific requirement. It provides high-performance real-time data integration through smarter query optimization, as well as data historization and master data management through advanced ETL, in a single tool. The Data Virtuality Platform is an enabler for modern data architectures such as Data Fabric and Data Mesh, providing the self-service capabilities and data governance features that are indispensable for these frameworks.
Data virtualization allows you to easily connect, deliver, and access data without extensive technical knowledge. Data can be manipulated, joined, and calculated independently of its source format and physical location. As a result, time-to-solution can improve drastically (by up to 5 times). New ideas can be tested through rapid prototyping, giving more flexibility and allowing faster test cycles before moving to production environments.
Learn more about data virtualization in this blog post: Data Virtualization: The Complete Overview
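To illustrate the idea of working with data independently of its source format, here is a minimal, self-contained sketch (not product code; all data and names are invented) that joins records from a CSV source with records from a JSON source entirely in memory, the way a virtual layer joins data without first copying it into one physical store:

```python
# Illustrative sketch only: joining two differently formatted sources
# (CSV and JSON) in memory. All data below is made up for the example.
import csv, io, json

csv_orders = "order_id,customer_id,amount\n1,42,19.99\n2,7,5.50"
json_customers = '[{"customer_id": 42, "name": "Acme"}, {"customer_id": 7, "name": "Globex"}]'

orders = list(csv.DictReader(io.StringIO(csv_orders)))
customers = {c["customer_id"]: c["name"] for c in json.loads(json_customers)}

# Join on customer_id regardless of each source's original format
joined = [
    {"order_id": o["order_id"],
     "customer": customers[int(o["customer_id"])],
     "amount": float(o["amount"])}
    for o in orders
]
print(joined)
```

The point of the sketch is that the join logic never cares whether a source was CSV, JSON, or a database table; that separation of logic from storage is what makes rapid prototyping across heterogeneous sources possible.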
The Data Virtuality Platform is a horizontal solution that can be used across multiple industries. The companies using our solution have the same need: easily integrate data from multiple sources so it can be quickly turned into insights. Here are some examples, more can be found in our solutions section.
Financial Services: Built on top of existing infrastructure or deployed as new infrastructure, the Data Virtuality Platform helps firms quickly adapt to ever-changing regulatory requirements such as BCBS 239, SFTR, Solvency II, and GDPR, and drive digital use cases.
Healthcare: With the Data Virtuality Platform, you can build an integrative data platform that aggregates data from different systems for transparency and workflow automation. Better patient-flow management helps reduce costs, lower staff turnover, and plan bed occupancy more efficiently.
Retail and E-Commerce: Customer 360, lean inventory management, personalized customer discounts, data privacy protection, and many more use cases can be enabled with the Data Virtuality Platform by bridging data silos across hybrid- and multi-cloud environments, integrating big data, IoT, streaming, etc., and using data management capabilities for data governance.
Companies of different sizes and industries with data integration or delivery challenges buy our solution. What differentiates us in the market is our belief that data integration and delivery should be available and affordable not only for enterprises but also for smaller companies. That is why we work very closely with both segments, enterprises and SMEs (small- and medium-sized enterprises), to help them meet their data needs.
Yes, we offer connectors for the clouds of Amazon, Google, and Microsoft Azure. You can find an extensive list of possible data sources here.
We can connect to all popular BI tools such as Power BI, Tableau, Looker, and Qlik. Please find the extensive list here.
Yes, we support REST APIs, and you can expose data through a REST API in the Data Virtuality Platform.
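As a rough illustration of what consuming data exposed over a REST API looks like from the client side, the sketch below builds a parameterized request URL. The host, resource path, and parameters are invented for this example and are not the actual Data Virtuality API; consult the product documentation for the real endpoints:

```python
# Hypothetical sketch: constructing a client request for data exposed
# over a REST API. Endpoint and parameters are invented, NOT the real API.
from urllib.parse import urlencode, urljoin

base_url = "https://dv.example.com/api/"   # assumed host
resource = "views/sales_summary"           # assumed exposed view
params = {"region": "EMEA", "limit": 100}

url = urljoin(base_url, resource) + "?" + urlencode(params)
print(url)  # the client would issue an HTTP GET against this URL
```

In practice a BI tool or script would send an authenticated GET to such a URL and receive the view's rows, typically as JSON.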
The Data Virtuality Platform enables the logical and distributed architectures that are essential to the data mesh concept by combining data virtualization and automated ETL. The data governance and security features needed in a data mesh environment are also provided within the Data Virtuality Platform. You can learn more about how the Data Virtuality Platform enables the Data Mesh framework here.
By combining the two technologies, data virtualization and automated ETL, and putting a unified metadata layer on top of them, the Data Virtuality Platform enables exactly the dynamic and holistic data management architecture needed to turn the data fabric from a concept into reality. Learn more here.
Yes, we do support flat files.
You can find technical documents and user guides under Docs & Support on our website.
Our pricing is scalable depending on the use case. It is based on the number of connector types and servers used. Please contact us at [email protected] if you would like to get an individual pricing for your use case.
The setup time depends on the scope of your project and can range anywhere from a couple of weeks to a couple of months. Connecting data sources is simple and takes minutes once the data source credentials are available. You can set up most connectors yourself; some are available on request and are installed separately.
The Data Virtuality Platform comes with features that help you ensure data governance. You can use metadata repositories to improve master data management and increase transparency, accountability, and auditability with automatic data lineage. Furthermore, third-party tools such as Collibra and Infogix can be easily integrated.
For a full list of features please check our Data Virtuality Platform page here where you can also download our product sheet for all technical details.
The Data Virtuality Platform is an advanced solution for sophisticated data management use cases. Non-technical users are able to quickly acquire the knowledge to operate the solution.
Recommended knowledge:
During the 14-day Proof-of-Concept period, you can schedule optional training sessions with our Solution Engineers.
Where to learn more about Data Virtuality:
Support:
Contact us:
Data Virtuality uses the best of ETL/ELT and data virtualization.
The Data Virtuality Platform uses both technologies: data virtualization and ETL/ELT. We combine these two distinct technologies, virtualization and replication, for the best possible performance. Gartner has named this architecture the future of data warehousing.
Interested in learning more about data integration technologies? Take a look at our free ebook here.
Our use cases vary, as the Data Virtuality Platform can help with any data integration or data delivery challenge. The main ones include:
Data Virtuality acts as a middle layer that processes, but does not store, data. All data remains either in your data sources or in your analytical storage.
You can use additional instances for purposes such as QA or development. Synchronization between the different instances can be done using our sync tool; we have published a blog post about it, which you can read here.
The minimum recommended hardware requirements are as follows:
Our desktop application works on all common operating systems, such as Windows, macOS, and Linux.
The Data Virtuality Platform can be deployed:
For large data volumes, we use a variety of approaches to handle them efficiently. Firstly, we use streaming on all data sources supporting it – this means that we do not do a full table scan, but rather only read the first batch of the data. Additionally, we use the concept of push down, so for example in a filtered statement, the data source will deliver only the filtered data. Lastly, we have different techniques for efficient distributed joining, such as Merge Join (on sorted keys) or Dependent Join (result of smaller table transported to filter the result of the bigger table).
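To make the dependent join idea above concrete, here is a generic sketch (not Data Virtuality's actual engine code; tables and names are invented): the join keys from the smaller side are collected first and pushed down as a filter on the larger side, so the big source only ever returns the matching rows instead of a full table:

```python
# Generic dependent-join sketch: push the small side's keys down as a
# filter on the large side, so only matching rows are transferred.
small_side = [{"id": 2, "region": "EMEA"}, {"id": 5, "region": "APAC"}]

def fetch_large_side(key_filter):
    """Stand-in for a pushed-down source query: SELECT * FROM big WHERE id IN (...)."""
    big_table = [{"id": i, "amount": i * 10} for i in range(1, 1000)]
    return [row for row in big_table if row["id"] in key_filter]

keys = {row["id"] for row in small_side}   # 1. collect keys from the small side
filtered = fetch_large_side(keys)          # 2. push the keys down as a filter
joined = [                                 # 3. join only the matching rows
    {**s, "amount": b["amount"]}
    for s in small_side for b in filtered if s["id"] == b["id"]
]
print(joined)
```

Only two of the thousand rows on the large side ever cross the wire here, which is the whole point of the technique when the large side is a remote source.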
The metadata from all connected data sources, as well as the logical data model, is made available in Data Virtuality. It can be searched or exposed to third-party tools.
We operate a multitenancy architecture and offer isolated user environments depending on a tenant’s role.
With our multiple virtual databases and isolated user environments feature, you can create additional databases within one system. This is especially helpful if you are working in a big company with several branches and want to use one system with multiple customized virtual databases (VDBs).
If you are looking for separation for staging purposes, you can also easily set up a dedicated staging instance and seamlessly sync it with production environments as needed.
A role-based permission system is available in the Data Virtuality Platform and can optionally be connected to your existing LDAP repository or Active Directory (AD). Each role can be assigned a set of permissions to access data sources or the logical data models. Additional security features, such as row-level security or column masking, can also be configured per role. Other authentication and permission management tools, such as PlainID, can be easily integrated as well.
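The interplay of per-role row-level security and column masking described above can be sketched in a few lines. This is a simplified illustration of the concepts only; the role names, columns, and masking rules below are invented, not the platform's configuration syntax:

```python
# Simplified sketch of role-based row-level security and column masking.
# Roles, columns, and rules are invented for illustration.
ROLE_RULES = {
    "analyst": {"row_filter": lambda r: r["region"] == "EMEA",  # row-level security
                "masked_columns": {"salary"}},                   # column masking
    "admin":   {"row_filter": lambda r: True, "masked_columns": set()},
}

def apply_permissions(rows, role):
    rules = ROLE_RULES[role]
    visible = [r for r in rows if rules["row_filter"](r)]        # drop forbidden rows
    return [{k: ("***" if k in rules["masked_columns"] else v)   # mask sensitive columns
             for k, v in r.items()}
            for r in visible]

data = [{"name": "Ada", "region": "EMEA", "salary": 90000},
        {"name": "Bob", "region": "US",   "salary": 80000}]
print(apply_permissions(data, "analyst"))
```

The same query thus yields different result sets per role, with the filtering and masking enforced centrally rather than in each consuming tool.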
Data Virtuality uses an SQL dialect based on the ANSI standard and the PostgreSQL dialect, with numerous extensions.
Data Virtuality provides CDC, but not for all data sources. For data sources where CDC is not available, we have an alternative approach for near real-time access; whether it applies depends on the data included in the table. To learn more, please contact us at [email protected], and our sales team will jump on a quick call to answer your questions.