If your organization is about to enter the world of big data, you not only need to decide whether Apache Hadoop is the right platform to use, but also which of its many components are best suited to your task. This field guide makes the exercise manageable by breaking down the Hadoop ecosystem into short, digestible sections. You'll quickly understand how Hadoop's projects, subprojects, and related technologies work together.
Each chapter introduces a different topic—such as core technologies or data transfer—and explains why certain components may or may not be useful for particular needs. When it comes to data, Hadoop is a whole new ballgame, but with this handy reference, you'll have a good grasp of the playing field.
- Core technologies—Hadoop Distributed File System (HDFS), MapReduce, YARN, and Spark
- Database and data management—Cassandra, HBase, MongoDB, and Hive
- Serialization—Avro, JSON, and Parquet
- Management and monitoring—Puppet, Chef, ZooKeeper, and Oozie
- Analytic helpers—Pig, Mahout, and MLlib
- Data transfer—Sqoop, Flume, distcp, and Storm
- Security, access control, auditing—Sentry, Kerberos, and Knox
- Cloud computing and virtualization—Serengeti, Docker, and Whirr
To view this DRM-protected ebook on your desktop or laptop, you will need Adobe Digital Editions, which is free software. We also strongly recommend signing up for an Adobe ID at the Adobe website. For more details, please see FAQ 1&2. To view this ebook on an iPhone, iPad, or Android mobile device, you will need the Adobe Digital Editions app, the BlueFire Reader app, or the Txtr app, all of which are also free. For more details, see this article.
- Size: 7.0 MB
- Publisher: O'Reilly Media
- Date published: 2015
- ISBN: 9781491947883 (DRM-EPUB)
- Read Aloud: not allowed