Price: 13,000 RUB
For legal entities: 13,000 RUB


Oracle Big Data 2017 Implementation Essentials 



Oracle
Exam code: 1Z0-449
Duration: 120 minutes
Exam language: English
Testing system: Pearson VUE
Branch: Moscow

Contents

Exam Topics:

Big Data Technical Overview

  • Describe the architectural components of the Big Data Appliance
  • Describe how Big Data Appliance integrates with Exadata and Exalytics
  • Identify and architect the services that run on each node in the Big Data Appliance, as it expands from single to multiple nodes
  • Describe the Big Data Discovery and Big Data Spatial and Graph solutions
  • Explain the business drivers behind Big Data and NoSQL versus Hadoop

Core Hadoop

  • Explain the Hadoop Ecosystem
  • Implement the Hadoop Distributed File System
  • Identify the benefits of the Hadoop Distributed File System (HDFS)
  • Describe the architectural components of MapReduce
  • Describe the differences between MapReduce and YARN
  • Describe Hadoop High Availability
  • Describe the importance of Namenode, Datanode, JobTracker, TaskTracker in Hadoop
  • Use Flume in the Hadoop Distributed File System
  • Implement the data flow mechanism used in Flume
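The Flume data-flow mechanism listed above (source → channel → sink) is expressed as an agent properties file. A minimal sketch, assuming an agent named a1 with a netcat source and an HDFS sink; all names, ports, and paths are illustrative:

```properties
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

# Netcat source listening on a local port (illustrative)
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# In-memory channel buffering events between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# HDFS sink writing events into the cluster (path is illustrative)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events
a1.sinks.k1.channel = c1
```

Such a file is started with `flume-ng agent --conf conf --conf-file example.conf --name a1`.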

Oracle NoSQL Database

  • Use an Oracle NoSQL database 
  • Describe the architectural components (Shard, Replica, Master) of the Oracle NoSQL database
  • Set up the KVStore
  • Use KVLite to test NoSQL applications
  • Integrate an Oracle NoSQL database with an Oracle database and Hadoop
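KVLite, the single-process edition of Oracle NoSQL Database used for testing, is started directly from the kvstore JAR. A hedged sketch; KVHOME, store name, host, and port are illustrative:

```shell
# Start a single-node KVLite store for local application testing
java -jar $KVHOME/lib/kvstore.jar kvlite -root ./kvroot \
     -store kvstore -host localhost -port 5000

# In another terminal, open an admin shell against the running store
java -jar $KVHOME/lib/kvstore.jar runadmin -host localhost -port 5000
```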

Cloudera Enterprise Hadoop Distribution

  • Describe the Hive architecture
  • Set up Hive with formatters and SerDe
  • Implement the Oracle Table Access for Hadoop connector
  • Describe the Impala real-time query and explain how it differs from Hive
  • Create a database and table from a Hadoop Distributed File System file in Hive
  • Use Pig Latin to query data in HDFS
  • Execute a Hive query
  • Move data from a database to a Hadoop Distributed File System using Sqoop
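The Hive and Sqoop tasks above can be sketched together; database, table, path, and connection names are illustrative, not taken from the exam:

```sql
-- HiveQL: create a database and an external table over an HDFS file,
-- then query it (schema and location are illustrative)
CREATE DATABASE IF NOT EXISTS sales;

CREATE EXTERNAL TABLE sales.orders (
  order_id INT,
  amount   DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/orders';

SELECT order_id, amount FROM sales.orders WHERE amount > 100;
```

Moving a relational table into HDFS with Sqoop is a single command along the lines of `sqoop import --connect jdbc:oracle:thin:@//dbhost:1521/orcl --username scott --table ORDERS --target-dir /user/hive/orders` (connection details illustrative).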

Programming with R

  • Describe the Oracle R Advanced Analytics for Hadoop connector
  • Use the Oracle R Advanced Analytics for Hadoop connector
  • Describe the architectural components of Oracle R Advanced Analytics for Hadoop
  • Implement an Oracle Database connection with Oracle R Enterprise
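An Oracle R Enterprise session is opened from R with `ore.connect()`. A minimal sketch; host, SID, user, and password are illustrative:

```r
library(ORE)          # Oracle R Enterprise client packages

# Connect the R session to an Oracle database schema
# (credentials and connection details are illustrative)
ore.connect(user = "rquser", sid = "orcl", host = "dbhost",
            password = "welcome1", port = 1521)

ore.is.connected()    # verify the session is established
ore.ls()              # list tables visible as ore.frame proxies
ore.disconnect()
```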

Oracle Loader for Hadoop

  • Explain the Oracle Loader for Hadoop
  • Configure the online and offline options for the Oracle Loader for Hadoop
  • Load Hadoop Distributed File System Data into an Oracle database
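Oracle Loader for Hadoop runs as a MapReduce job submitted from the command line; the choice between online loading (direct into the database) and offline loading (files written to HDFS for later import) is made in the job's XML configuration. A hedged sketch; OLH_HOME and the configuration file name are illustrative:

```shell
# Submit an OLH job; input format, target table, and output mode
# are all defined in the XML configuration file
hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
       -conf my_loader_conf.xml -libjars $OLH_HOME/jlib/oraloader.jar
```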

Oracle SQL Connector for Hadoop Distributed File System (HDFS)

  • Configure an external table for HDFS using the Oracle SQL Connector for Hadoop
  • Install the Oracle SQL Connector for Hadoop
  • Describe the Oracle SQL Connector for HDFS
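Oracle SQL Connector for HDFS ships an ExternalTable command-line tool that generates and publishes the external table definition. A hedged sketch; OSCH_HOME and the configuration file are illustrative:

```shell
# Create an Oracle external table over HDFS data
# (connection and table properties live in the XML configuration)
hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
       oracle.hadoop.exttab.ExternalTable \
       -conf my_exttab_conf.xml -createTable
```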

Oracle Data Integrator (ODI) and Hadoop

  • Use ODI to transform data from Hive to Hive
  • Use ODI to move data from Hive to Oracle
  • Use ODI to move data from an Oracle database to a Hadoop Distributed File System using Sqoop
  • Configure the Oracle Data Integrator with Application Adaptor for Hadoop to interact with Hadoop

Big Data SQL

  • Explain how Big Data SQL is used in a Big Data Appliance/Exadata architecture
  • Set up and configure Oracle Big Data SQL
  • Demonstrate Big Data SQL syntax used in create table statements
  • Access NoSQL and Hadoop data using a Big Data SQL query
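Big Data SQL exposes Hadoop data to the database through external tables with dedicated access drivers (ORACLE_HIVE for Hive tables, ORACLE_HDFS for raw HDFS files). A hedged sketch of the create table syntax; all object names are illustrative:

```sql
-- Oracle external table over a Hive table via the ORACLE_HIVE driver
-- (table, directory, and Hive source names are illustrative)
CREATE TABLE movie_facts (
  cust_id   NUMBER,
  movie_id  NUMBER,
  rating    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (
    com.oracle.bigdata.tablename = default.movie_facts
  )
)
REJECT LIMIT UNLIMITED;
```

Once created, the table is queried with ordinary SQL, including joins against regular Oracle tables.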

XQuery for Hadoop Connector

  • Set up the Oracle XQuery for Hadoop connector
  • Perform a simple XQuery using Oracle XQuery for Hadoop
  • Use Oracle XQuery for Hadoop with Hive to map an XML file into a Hive table
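An Oracle XQuery for Hadoop query is a plain .xq file submitted through the oxh JAR, which compiles it into a MapReduce job. A hedged sketch; OXH_HOME, script name, and output directory are illustrative:

```shell
# Run an XQuery script as a MapReduce job; results land in HDFS
hadoop jar $OXH_HOME/lib/oxh.jar my_query.xq -output /user/oxh/out
```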

Securing Hadoop

  • Describe Oracle Big Data Appliance security and encryption features
  • Set up Kerberos security in Hadoop
  • Set up the Hadoop Distributed File System to use Access Control Lists
  • Set up Hive and Impala access security using Apache Sentry
  • Use LDAP and Active Directory for Hadoop access control
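HDFS Access Control Lists extend the basic POSIX permission bits; once `dfs.namenode.acls.enabled` is set to true, they are managed with setfacl/getfacl. A hedged sketch; user and path are illustrative:

```shell
# Grant a named user read/execute access on a directory via an ACL entry
hdfs dfs -setfacl -m user:alice:r-x /data/secure

# Inspect the resulting ACL; directories with ACLs also show a '+'
# in 'hdfs dfs -ls' output
hdfs dfs -getfacl /data/secure

# Remove all ACL entries, falling back to the base permission bits
hdfs dfs -setfacl -b /data/secure
```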

