Automation seems to be key even in Hadoop

IT vendors seem to have realized that automation is key in the big data and Hadoop world as well. No longer do they want you to hunt for specialized big data skills in a competitive market; they would rather have you leave the work to the machines. While Gartner has been bemoaning the slow growth of production Hadoop deployments, there is no denying that Hadoop is here to stay and is set to become a core part of your data center. It therefore makes sense for companies to look for automation opportunities that accelerate the journey from the enterprise data warehouse to Hadoop, and from POC to production.

To quote from a recent press release by a product vendor:
The Impetus Data Warehouse Workload Migration solution addresses this objective by providing enterprises with the ability to identify and safely migrate data, extract/transform/load (ETL) processing and large scale analytics from the enterprise data warehouse (EDW) to a big data warehouse (BDW) based on Hadoop ecosystem technologies...

An automated migration toolset consisting of utilities that can be used... to automate the conversion of existing SQL-based scripts into equivalent HiveQL for execution in the Hadoop environment. It also allows users to run a set of data quality functions to standardize, cleanse and de-dupe data.
If desired, processed data can then be uploaded back to the source EDW for reporting purposes. Pre-built conversion logic is provided for IBM (DB2 and Netezza), Microsoft SQL Server, Oracle (Stored Procedures and SQL), and Teradata source data stores. Additionally, it includes a library of advanced data science machine learning algorithms for solving difficult data quality challenges.
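The release doesn't show what a converted script actually looks like, but a rough sketch helps make the idea concrete. In the example below (table and column names are hypothetical, not taken from the vendor's toolset), a Teradata QUALIFY clause, which Hive does not support, is rewritten as a subquery over a window function; the same pattern doubles as a simple de-dupe step that keeps only the latest record per key:

    -- Source (Teradata): QUALIFY filters on a window function directly.
    --   SELECT cust_id, region, load_ts
    --   FROM   edw.customers
    --   QUALIFY ROW_NUMBER() OVER (PARTITION BY cust_id
    --                              ORDER BY load_ts DESC) = 1;

    -- Converted HiveQL: Hive has no QUALIFY, so the window function moves
    -- into a subquery and the filter becomes an ordinary WHERE clause.
    INSERT OVERWRITE TABLE bdw.customers_clean
    SELECT cust_id, region, load_ts
    FROM (
      SELECT cust_id,
             region,
             load_ts,
             ROW_NUMBER() OVER (PARTITION BY cust_id
                                ORDER BY load_ts DESC) AS rn
      FROM bdw.customers
    ) ranked
    WHERE rn = 1;

Whether or not a given toolset emits exactly this form, the shape of the problem is the same: dialect-specific constructs have to be mapped onto Hive's narrower SQL surface, which is precisely why automating the conversion is attractive.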