Log Collection Servers: Scribe vs. Flume

I read this post
about Cloudera’s Flume with much interest. Flume sounds
like a very useful tool, and from Cloudera’s business
perspective it makes a lot of sense:

We’ve seen our customers have great success using Hadoop for processing their
data, but the question of how to get the data there to process in the first
place was often significantly more challenging.

Just in case you haven’t had time to read about Flume yet, here’s a short
description from the GitHub project page:

Flume is a distributed, reliable, and available service for efficiently
collecting, aggregating, and moving large amounts of log data. It has a simple
and flexible architecture based on streaming data flows. It is robust and fault
tolerant with tunable reliability mechanisms and many failover and recovery
mechanisms. The system is centrally managed and allows for intelligent dynamic
management. It uses a simple extensible data model that allows for online
analytic applications.
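
To make those “streaming data flows” concrete, here is a minimal sketch of a dataflow spec in the style of pre-NG Flume, where each logical node is a source piped into a sink; the node names, file path, and collector port below are hypothetical:

    # an agent node tails a local log file and forwards events to a collector
    agent1 : tail("/var/log/app/app.log") | agentSink("collector1", 35853) ;

    # the collector listens on that port and writes the events to HDFS
    collector1 : collectorSource(35853) | collectorSink("hdfs://namenode/flume/logs", "app-") ;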

In a way this sounded a bit familiar. I thought I’d seen something kind of
similar before: Scribe:

Scribe is a server for aggregating streaming log data. It is designed to
scale to a very large number of nodes and be robust to network and node
failures. There is a scribe server running on every node in the system,
configured to aggregate messages and send them to a central scribe server (or
servers) in larger groups. If the central scribe server isn’t available the
local scribe server writes the messages to a file on local disk and sends them
when the central server recovers. The central scribe server(s) can write the
messages to the files that are their final destination, typically on an nfs
filer or a distributed filesystem, or send them to another layer of scribe
servers.
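
That local-disk fallback maps directly onto Scribe’s store configuration: a buffer store wraps a primary network store pointing at the central server and a secondary file store that spools to disk while the primary is down. A minimal sketch, with a made-up hostname and path:

    port=1463

    <store>
    category=default
    type=buffer
    retry_interval=30

    <primary>
    type=network
    remote_host=central-scribe.example.com
    remote_port=1463
    </primary>

    <secondary>
    type=file
    fs_type=std
    file_path=/var/tmp/scribe
    max_size=100000000
    </secondary>
    </store>

Once the network store recovers, the buffer store replays whatever was spooled to disk.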

So my question is: how do Flume and Scribe compare? What
are the major differences, and what scenarios are a good fit for one or the other?

If you have the answer to any of these questions, please drop a comment or send me an email.

Update: Looks like I’d failed to find this useful thread, but thanks to this
comment the mistake is now corrected:

1. Flume allows you to configure your Flume installation from a central
point, without having to ssh into every machine, update a configuration variable
and restart a daemon or two. You can start, stop, create, delete and reconfigure
logical nodes on any machine running Flume from any command line in your network
with the Flume jar available.

2. Flume also has centralised liveness monitoring. We’ve heard a couple of
stories of Scribe processes silently failing, but lying undiscovered for days
until the rest of the Scribe installation starts creaking under the increased
load. Flume allows you to see the health of all your logical nodes in one place
(note that this is different from machine liveness monitoring; often the machine
stays up while the process might fail).

3. Flume supports three distinct types of reliability guarantees, allowing
you to make tradeoffs between resource usage and reliability. In particular,
Flume supports fully ACKed reliability, with the guarantee that all events will
eventually make their way through the event flow.

4. Flume’s also really extensible: it’s easy to write your own source
or sink and integrate almost any system with Flume. If rolling your own is
impractical, it’s often very straightforward to have your applications output
events in a form that Flume can understand (Flume can run Unix processes, for
example, so if you can use a shell script to get at your data, you’re golden).

— Henry Robinson
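
Points 1 and 3 are easiest to see from the flume shell. Here is a hedged sketch of configuring three logical nodes from a single master; the master host, node names, file paths, and ports are all made up, and the prompt is simplified. The three agent sinks correspond to Flume’s three reliability levels: best effort (BE), disk failover (DFO), and end-to-end acknowledgement (E2E):

    $ flume shell -c master.example.com:35873
    > exec config web1 'tail("/var/log/nginx/access.log")' 'agentBESink("collector1", 35853)'
    > exec config app1 'tail("/var/log/app/app.log")' 'agentDFOSink("collector1", 35853)'
    > exec config db1 'tail("/var/log/mysql/slow.log")' 'agentE2ESink("collector1", 35853)'

No ssh-ing into web1, app1, or db1 is needed; the master pushes the new configurations out to the nodes.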
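
As for point 4, a custom sink is a small amount of Java. The sketch below follows the shape of the pre-NG Flume plugin API (extend EventSink.Base and override open/append/close); the class is a hypothetical example, and package or method names may differ between versions:

    import java.io.IOException;

    import com.cloudera.flume.core.Event;
    import com.cloudera.flume.core.EventSink;

    // A toy sink that writes each event body to stdout; a real sink would
    // talk to whatever system you are integrating with Flume.
    public class StdoutDemoSink extends EventSink.Base {

      @Override
      public void open() throws IOException {
        // acquire connections or file handles for the real destination here
      }

      @Override
      public void append(Event e) throws IOException {
        // each log record arrives as an Event; the body is the raw message bytes
        System.out.println(new String(e.getBody()));
      }

      @Override
      public void close() throws IOException {
        // flush and release resources here
      }
    }

Registered through Flume’s plugin mechanism, such a sink can then be used in a dataflow spec just like the built-in ones.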

In the same thread, I’m also reading about another tool, Chukwa:

Chukwa is a Hadoop subproject devoted to large-scale log collection and
analysis. Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and
MapReduce framework and inherits Hadoop’s scalability and robustness. Chukwa
also includes a flexible and powerful toolkit for displaying, monitoring and
analyzing results, in order to make the best use of this collected data.