DIFFERENT APPROACHES COMPARED
According to Gartner, the majority of data warehouse projects fail. Data replication built on classical Extract-Transform-Load (ETL) processes falls short because it cannot keep pace with the requirements of modern data infrastructure.
Projects with classical ETL-based data integration
project failure rate
6–9 month development cycles
ETL-based data integration is a labour-intensive manual process following a strict 4-step procedure: (1) analyze requirements, (2) design the data model, (3) develop and (4) deploy the pipelines, leading to several months of implementation time.
Inflexible to changes
Manually developing and adapting connections to data sources slows down every change to the data infrastructure. The greater the number of connected systems, the more complicated and inflexible the infrastructure becomes.
Moving large amounts of data in a batch-oriented manner limits how often data in the target storage can be refreshed. Data is at least a few hours old, so live data analysis and task automation are not possible.
Projects with Data Virtuality Logical Data Warehouse
project success rate
Complete setup in 1 day
With more than 200 connectors, Data Virtuality enables users to access data from any data source, API, target storage and BI tool in minutes. All data instantly appears in one virtual layer accessible with any BI tool.
Agile data infrastructure
When new requirements arise, users can replicate new data with Data Virtuality in minutes, without involving developers. We provide one platform to connect, transform, query and join data from multiple data sources in one language: SQL.
Data Virtuality unifies all connected data in one virtual layer. This enables live-data access for real-time reporting and task automation. Data can also be moved into a target storage for maximum performance. All of this using only SQL.
Data Virtuality Logical Data Warehouse marries two distinct technologies to create an entirely new way of integrating data. The combination of data virtualization and next-generation ETL enables an agile, high-performance data infrastructure.
Connect all your data sources, target storages and BI tools. Manage the flow of your data across all systems using SQL.
Data Virtuality is a Java-based software solution that connects to your data sources, target storages and BI tools and manages the flow of data between them. The Data Virtuality Server can be deployed on-premise and in the cloud.
Replicate data from more than 200 data sources. Connect via JDBC, ODBC, REST APIs, SOAP, XML, JSON and CSV. Data Virtuality’s virtualization engine enables you to query and join live data in SQL from any connected data source.
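As an illustration, a federated join across two connected sources can be written as a single SQL statement against the virtual layer. The schema and table names below (postgres_crm, shopify) are hypothetical examples, not part of any actual deployment:

```sql
-- Join live data from a CRM database and a shop API in one query.
-- Schema and table names are hypothetical examples.
SELECT c.customer_id,
       c.email,
       SUM(o.total) AS revenue
FROM postgres_crm.customers c
JOIN shopify.orders o
  ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.email;
```

To the query author, both systems simply appear as SQL schemas; the virtualization engine resolves where the data actually lives.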
Use any target storage to speed up your queries by extracting data from Data Virtuality’s virtual layer. Using SQL queries, Data Virtuality builds fully automated ETL tracks to load data into your target storage.
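Conceptually, such a replication can be expressed as plain SQL that materializes data from the virtual layer into the target storage; the schema names below (virtual_layer, dwh) are hypothetical:

```sql
-- Materialize recent orders from the virtual layer into a warehouse schema.
-- Schema names are hypothetical examples.
CREATE TABLE dwh.daily_orders AS
SELECT order_id, customer_id, total, order_date
FROM virtual_layer.orders
WHERE order_date >= CURRENT_DATE - 30;
```

Subsequent BI queries then hit the fast local copy in dwh instead of the original source system.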
Connect your favorite Business Intelligence tools to Data Virtuality using JDBC or ODBC. Data Virtuality acts as a unified data layer of all connected data sources.
Matthias Korn (Head of Solutions Engineering at Data Virtuality) explains Data Virtuality's architecture.
Select from more than 200 connectors. Access your data in 5 minutes.
Quickly connect to your data sources using Data Virtuality connectors. Simply fill out the parameters in the wizard and access your data in only 5 minutes.
Data Virtuality transforms connected data sources into SQL tables, regardless of the source file format. Join and query all data from connected data sources using SQL.
Matthias Korn (Head of Solutions Engineering at Data Virtuality) explains how to connect to data sources using Data Virtuality.
Data Virtuality offers a variety of features that make it possible to build an agile data infrastructure.
Data Virtuality puts one virtual layer around connected data sources enabling users to query real-time data. Perform joins across multiple data sources on the fly.
With the ability to write to connected data sources, the Data Virtuality Server enables you to trigger actions based on data.
Use and analyze historical data.
Create a company-wide flexible data model. Based on virtual views, all data can be modelled and adapted to your business needs on the fly.
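A virtual view of this kind is ordinary SQL; the names below (views, postgres_crm) are hypothetical examples:

```sql
-- A virtual view that cleans and renames source fields for business users.
-- No data is copied; the view is resolved against the source at query time.
CREATE VIEW views.active_customers AS
SELECT customer_id,
       TRIM(email) AS email,
       country
FROM postgres_crm.customers
WHERE status = 'active';
```

Because the view is virtual, changing the business definition of an "active customer" is a one-line edit rather than a rebuilt pipeline.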
Access data and schedule updates from any web service using Data Virtuality’s CSV or XML/JSON query builder. Data Virtuality automatically applies a relational structure and enables you to modify the output format with a few clicks. Easily add data from web services to your data model by creating views.
Set rules for automated decision making and execute them. Automate data workflows and edit data in your data sources with the SQL queries you built.
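A write-back rule of this kind could be as simple as a scheduled SQL statement; the table and column names below are hypothetical:

```sql
-- Flag high-value orders for manual review directly in the source system.
-- Table and column names are hypothetical examples.
UPDATE shop.orders
SET review_flag = 'pending'
WHERE total > 10000
  AND review_flag IS NULL;
```

Combined with the scheduler, such a statement becomes an automated decision rule running against live data.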
Set up flexible schedules and choose from a variety of scheduling options: intervals down to one minute and schedules dependent on other jobs.
Gain full control with Data Virtuality’s transparent overview of all schedules and jobs.
Get data from binary logs without affecting the transactional performance of your data sources. Synchronise this data to your target storage.
Data Virtuality has a sophisticated user management system built on role-based permissions. Assign permissions to roles and add roles to users. Track user actions in the audit log. Import existing permission rules from Active Directory and LDAP.
What our customers say
Being able to work productively right away is a big advantage for us compared to the usual month-long implementation times of traditional data warehouse projects.
Head of BI, Finance and HR
Before Data Virtuality we had to go into each of our databases, pull the data and somehow stitch it together in Excel. Now we can access data at all times, set up reports in Data Virtuality, schedule data import jobs and make the reports available to everyone who needs them.
Senior Business Intelligence Manager
We can make considerably better use of our data systems. The transparency of information is significantly higher. Decision making is no longer based on intuition, but driven by data and is, therefore, well-founded.
Head of eCommerce