Moving data into Oracle Autonomous Data Warehouse Cloud (ADWC) – Configuration Guide
| Component | Version |
| --- | --- |
| Data Virtuality Pipes | 2018-06 |
| Data Virtuality Pipes Professional | 2018-06 |
| Data Virtuality Logical Data Warehouse | 2.0.46 |
| Oracle Database JDBC Driver | ojdbc7.jar - 22.214.171.124 JDBC Drivers & UCP for Cloud Release (note: this differs from the RELEASE version of the JDBC driver, as it contains the fix to support TLSv1.2) |
This document assumes that the Autonomous Data Warehouse Cloud (ADWC) has been provisioned and the corresponding wallet zip file has been downloaded. For the Oracle documentation on provisioning ADWC, please check here.
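As a sketch of how the downloaded wallet is typically used with the Oracle JDBC thin driver: the wallet zip is unzipped to a local directory, and the driver is pointed at that directory so it can pick up the TLS credentials and the service aliases in `tnsnames.ora`. The path and service name below are illustrative placeholders, not values from this guide.

```
# Hypothetical example - unzip the downloaded wallet first:
#   unzip Wallet_mydb.zip -d /opt/oracle/wallet

# JDBC URL using a service alias from the wallet's tnsnames.ora
# (e.g. mydb_high); TNS_ADMIN tells the driver where the wallet files live.
jdbc:oracle:thin:@mydb_high?TNS_ADMIN=/opt/oracle/wallet
```

With older drivers that do not accept `TNS_ADMIN` in the URL, the same effect can usually be achieved by setting the `oracle.net.tns_admin` system property to the wallet directory.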
Log into Data Virtuality Pipes here. If you do not have an account, start your 14-day trial here. Follow the instructions in this Data Virtuality document to configure Oracle ADWC as the target storage for your data.
First, set up Oracle ADWC as the target storage:
Choose Oracle ADWC as the destination and enter the credentials. Enter the connection details, upload your wallet in Pipes, and click “Connect” to complete the first step.
Second, connect data sources (cloud-based SaaS applications as well as databases): click “Plus” to connect to the desired data source, e.g. Redshift, and enter the credentials. In the advanced options, click “Edit” and enable, for example, SSL if needed. The connection wizard is now finished, and the data from the original data source(s) can easily be moved into Oracle ADWC.
Third, create a data pipeline: go back to the dashboard and choose a data source. Locate the table to be moved, e.g. the ad_performance table, and preview the data. The data will look the same in Oracle ADWC once the move is complete.
There are several load options under “Load Settings”. Check “Incremental” in the scheduling options; otherwise, the whole table is moved on every run. The value in the criteria field is compared with the latest value already in Oracle ADWC to determine which rows still need to be loaded.
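The incremental comparison described above can be sketched as follows; the `updated_at` column name and the sample rows are illustrative assumptions, not values taken from this guide.

```python
def incremental_load(source_rows, target_rows, criteria="updated_at"):
    """Move only rows whose criteria value is newer than the latest
    value already present in the target (a sketch of incremental load)."""
    latest = max((row[criteria] for row in target_rows), default=None)
    if latest is None:
        return list(source_rows)          # first run: full load
    return [row for row in source_rows if row[criteria] > latest]

# Illustrative data: the target already holds rows up to timestamp 2,
# so only the row with updated_at=3 still needs to be moved.
source = [{"id": 1, "updated_at": 1}, {"id": 2, "updated_at": 2},
          {"id": 3, "updated_at": 3}]
target = [{"id": 1, "updated_at": 1}, {"id": 2, "updated_at": 2}]
print(incremental_load(source, target))
```

Without the incremental check, every run would ship all three rows again; with it, only rows newer than the target's latest criteria value are transferred.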
The field “ID” is optional. If it is set, Data Virtuality Pipes decides whether incoming data is appended as a new row or replaces the existing row with the same ID.
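A minimal sketch of that append-or-replace (upsert) behaviour, keyed on a hypothetical `id` column with made-up sample data:

```python
def upsert(target, incoming, key="id"):
    """Replace rows whose key already exists in the target;
    append rows whose key is new (a sketch of upsert behaviour)."""
    by_key = {row[key]: row for row in target}
    for row in incoming:
        by_key[row[key]] = row            # replace existing or append new
    return list(by_key.values())

target = [{"id": 1, "clicks": 10}, {"id": 2, "clicks": 5}]
incoming = [{"id": 2, "clicks": 7}, {"id": 3, "clicks": 1}]
result = upsert(target, incoming)
```

Here the row with ID 2 is replaced by the incoming version, while the row with ID 3 is appended, so the target never holds two rows with the same ID.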
Name the pipe and click “Create Pipe”.
Lastly, review the execution steps: select the newly created pipe and review its details. Check the execution status, the duration, and the completion time. To review the table that was moved into Oracle ADWC, open Oracle SQL Developer, click on the table, e.g. redshift_ad_performance, and inspect the moved data.
1. Which databases and APIs are supported?
2. What is the supported frequency of data movement?
- Data Virtuality Pipes: You determine how often the data should be moved. In the third step, “Create Pipes to integrate the data”, you can schedule the frequency of data movement under the load settings. We recommend checking the “Incremental” option; otherwise, the whole table is moved every time.
- Data Virtuality Logical Data Warehouse: With the sophisticated scheduling of Data Virtuality Logical Data Warehouse, you can replicate your data every minute, if needed. Dependencies on other replication jobs are also configurable. Watch the following video to get more detailed insights: https://vimeo.com/175877949
3. What types of data movement are supported?
Data Virtuality supports multiple data movement types depending on the different use cases. Learn more about the specific algorithmic concepts behind each type and usage: https://datavirtuality.com/docs/#doc=User%20Guide&version=2.0&page=Replication_and_Materialization.html
The following topics are covered in this user guide:
- Complete Materialization
- Incremental Replication (Movement)
- Batch Replication (Movement)
- History Update (slowly-changing-dimension)
- Copy Over
- Upsert Update
- Overview of Replication (Movement) Types
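As an illustration of one of the types listed above, a history update in the slowly-changing-dimension style keeps superseded rows instead of overwriting them. The `plan` and `is_current` columns below are illustrative assumptions, not part of the Data Virtuality documentation.

```python
def history_update(history, incoming, key="id"):
    """Slowly-changing-dimension sketch: a changed row does not overwrite
    the current version; the old version is marked non-current and the
    new version is appended, preserving the full history."""
    current = {row[key]: row for row in history if row["is_current"]}
    for row in incoming:
        old = current.get(row[key])
        new = {**row, "is_current": True}
        if old is None:
            history.append(new)           # brand-new key: just append
        elif old != new:
            old["is_current"] = False     # close out the old version
            history.append(new)           # record the new version
    return history

# A customer changes plan; both versions survive in the history table.
history = [{"id": 1, "plan": "basic", "is_current": True}]
history = history_update(history, [{"id": 1, "plan": "premium"}])
```

Contrast this with the upsert type, where the old row would simply be replaced and its previous state lost.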
Data Management Platform (DMP)
Data Virtuality Logical Data Warehouse Data Sheet
Data Virtuality Pipes One Pager
Data Virtuality Logical Data Warehouse One Pager
Easy Data Integration Within Minutes: Data Virtuality Pipes
Easy Data Replication Within Minutes: Data Virtuality Pipes
About the Webinar
Dr. Nick Golovin, CEO & Founder of Data Virtuality, is launching our new product Data Virtuality Pipes. Learn more about how to replicate all your data into your analytical database with Pipes.
What you will learn
- Quick and easy Pipes set-up
- How to extract and load data into one target storage and analyze it with your favourite BI tool
- Benefits and functionality of Pipes
- Why data-driven starters should use Pipes
- Security and data ownership aspects while using Pipes
Combine Data From Your Website, CRM & ERP
Data Virtuality and OWOX BI: How to combine data from your website, CRM, ERP and other systems
About the Webinar
Dr. Nick Golovin, CEO & Founder of Data Virtuality, and Maxim Ostapenko, Product Manager at OWOX, illustrate specific examples of system integrations.
What you will learn
- What are the difficulties in combining data from multiple services?
- How to combine data into a single system using Data Virtuality and OWOX BI?
- Examples of imports and exports to and from Google BigQuery
- Examples of informative reports and charts based on complete data from multiple sources
The Data Virtuality Logical Data Warehouse Approach
Automating The Data Warehouse
Data Virtuality Explained in 1 Minute
How Data Virtuality works - explained in just one minute
About the Video
Here we show you how Data Virtuality actually works. Every data source in all your BI tools!
What you will learn
- Data Virtuality explained in a fast and easy manner
- How data gets from A to B
- How data will become accessible for your BI tools