
Implementing Primary/Standby Switchover with the State Pattern

Posted on 2011-04-30 | Category: Design Patterns

In a recent project I needed to implement primary/standby switchover. Analysis showed that it boils down to switching between just two states, which an if/switch statement could handle easily, but in order to learn and practice the State pattern I implemented it with the State pattern instead (a plain if/switch sketch is shown after the test code for comparison):

// HAState.h: interface for the HAState class.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_HASTATE_H__410262B3_3FEB_44B3_BFA7_04C4BEBCE636__INCLUDED_)
#define AFX_HASTATE_H__410262B3_3FEB_44B3_BFA7_04C4BEBCE636__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#include <iostream>

class HASwitch;

enum HASTATE
{
    STANDBY,
    PRIMARY,
    FAILURE
};

class HAState
{
public:
    HAState();
    virtual ~HAState();

    virtual HASTATE getState();
    virtual bool standby2primary( HASwitch* pSwitch ) = 0;
    virtual bool primary2standby( HASwitch* pSwitch ) = 0;

protected:
    void changeState( HASwitch* pSwitch, HAState* pNewState );

protected:
    HASTATE m_euState;
};

#endif // !defined(AFX_HASTATE_H__410262B3_3FEB_44B3_BFA7_04C4BEBCE636__INCLUDED_)
// HAState.cpp: implementation of the HAState class.
//
//////////////////////////////////////////////////////////////////////

#include "HAState.h"
#include "HASwitch.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

HAState::HAState() : m_euState( FAILURE )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

HAState::~HAState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

void HAState::changeState( HASwitch* pSwitch, HAState* pNewState )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    pSwitch->changeState( pNewState );
}

HASTATE HAState::getState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return m_euState;
}
 

// PrimaryState.h: interface for the PrimaryState class.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_PRIMARYSTATE_H__DFA509B1_9344_4750_A582_5637A65DF2A8__INCLUDED_)
#define AFX_PRIMARYSTATE_H__DFA509B1_9344_4750_A582_5637A65DF2A8__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#include <boost/noncopyable.hpp>
#include "HAState.h"

class PrimaryState : public HAState, public boost::noncopyable
{
public:
    PrimaryState();
    virtual ~PrimaryState();
    static PrimaryState& instance();
    virtual bool standby2primary( HASwitch* pSwitch );
    virtual bool primary2standby( HASwitch* pSwitch );
private:
    static PrimaryState s_Instance;
};

#endif // !defined(AFX_PRIMARYSTATE_H__DFA509B1_9344_4750_A582_5637A65DF2A8__INCLUDED_)
 

// PrimaryState.cpp: implementation of the PrimaryState class.
//
//////////////////////////////////////////////////////////////////////

#include "PrimaryState.h"
#include "StandbyState.h"
#include "HASwitch.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

PrimaryState PrimaryState::s_Instance;

PrimaryState::PrimaryState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    m_euState = PRIMARY;
}

PrimaryState::~PrimaryState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

bool PrimaryState::standby2primary( HASwitch* pSwitch )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    std::cout << "Current state is already primary" << std::endl;
    return true;
}

bool PrimaryState::primary2standby( HASwitch* pSwitch )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    std::cout << "Begin switch to standby..." << std::endl;
    std::cout << "Do something" << std::endl;
    changeState( pSwitch, &StandbyState::instance() );
    std::cout << "End switch to standby..." << std::endl;
    return true;
}

PrimaryState& PrimaryState::instance()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return s_Instance;
}
// StandbyState.h: interface for the StandbyState class.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_STANDBYSTATE_H__028C47F8_1A92_4854_A93D_2AECB6A17DF6__INCLUDED_)
#define AFX_STANDBYSTATE_H__028C47F8_1A92_4854_A93D_2AECB6A17DF6__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#include <boost/noncopyable.hpp>
#include "HAState.h"

class StandbyState : public HAState, public boost::noncopyable
{
public:
    StandbyState();
    virtual ~StandbyState();
    static StandbyState& instance();
    virtual bool standby2primary( HASwitch* pSwitch );
    virtual bool primary2standby( HASwitch* pSwitch );
private:
    static StandbyState s_Instance;
};

#endif // !defined(AFX_STANDBYSTATE_H__028C47F8_1A92_4854_A93D_2AECB6A17DF6__INCLUDED_)
// StandbyState.cpp: implementation of the StandbyState class.
//
//////////////////////////////////////////////////////////////////////

#include "StandbyState.h"
#include "PrimaryState.h"
#include "HASwitch.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

StandbyState StandbyState::s_Instance;

StandbyState::StandbyState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    m_euState = STANDBY;
}

StandbyState::~StandbyState()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

bool StandbyState::standby2primary( HASwitch* pSwitch )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    std::cout << "Begin switch to primary..." << std::endl;
    std::cout << "Do something" << std::endl;
    changeState( pSwitch, &PrimaryState::instance() );
    std::cout << "End switch to primary..." << std::endl;
    return true;
}

bool StandbyState::primary2standby( HASwitch* pSwitch )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    std::cout << "Current state is already standby" << std::endl;
    return true;
}

StandbyState& StandbyState::instance()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return s_Instance;
}
// HASwitch.h: interface for the HASwitch class.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_HASWITCH_H__AC109E28_51E4_46AE_BC07_9E93FA747B65__INCLUDED_)
#define AFX_HASWITCH_H__AC109E28_51E4_46AE_BC07_9E93FA747B65__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#include <boost/noncopyable.hpp>
#include "HAState.h"

class HASwitch : public boost::noncopyable
{
    friend class HAState;
public:
    HASwitch();
    virtual ~HASwitch();
    static HASwitch& instance();
    HASTATE getState() const;
    bool standby2primary();
    bool primary2standby();
private:
    void changeState( HAState* pNewState );
private:
    static HASwitch g_Instance;
    HAState* m_pCurState;
};

#endif // !defined(AFX_HASWITCH_H__AC109E28_51E4_46AE_BC07_9E93FA747B65__INCLUDED_)
 

// HASwitch.cpp: implementation of the HASwitch class.
//
//////////////////////////////////////////////////////////////////////

#include "HASwitch.h"
#include "HAState.h"
#include "PrimaryState.h"
#include "StandbyState.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

HASwitch HASwitch::g_Instance;

HASwitch::HASwitch()
{
    m_pCurState = &StandbyState::instance();
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

HASwitch::~HASwitch()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
}

HASTATE HASwitch::getState() const
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return m_pCurState->getState();
}

bool HASwitch::standby2primary()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return m_pCurState->standby2primary( this );
}

bool HASwitch::primary2standby()
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    return m_pCurState->primary2standby( this );
}

void HASwitch::changeState( HAState* pNewState )
{
    std::cout << "Call " << __FUNCTION__ << std::endl;
    m_pCurState = pNewState;
}

HASwitch& HASwitch::instance()
{
    return g_Instance;
}
Test code:

#include <iostream>
#include <assert.h>
#include "HASwitch.h"

int main( int argc, char** argv )
{
    assert( HASwitch::instance().getState() == STANDBY );
    HASwitch::instance().standby2primary();
    assert( HASwitch::instance().getState() == PRIMARY );
    HASwitch::instance().primary2standby();
    assert( HASwitch::instance().getState() == STANDBY );
    return 0;
}
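For comparison, the plain if/switch approach mentioned at the beginning would look roughly like the sketch below (hypothetical code, not part of the project; the class and enum names are made up):

// Hypothetical if/switch version of the same two-state switchover,
// shown only to contrast with the State pattern implementation above.
#include <iostream>

enum SimpleState { SIMPLE_STANDBY, SIMPLE_PRIMARY };

class SimpleHASwitch
{
public:
    SimpleHASwitch() : m_state( SIMPLE_STANDBY ) {}

    bool standby2primary()
    {
        if ( m_state == SIMPLE_PRIMARY )
        {
            std::cout << "Already primary" << std::endl;
            return true;
        }
        std::cout << "Switch to primary" << std::endl;  // real switchover work goes here
        m_state = SIMPLE_PRIMARY;
        return true;
    }

    bool primary2standby()
    {
        if ( m_state == SIMPLE_STANDBY )
        {
            std::cout << "Already standby" << std::endl;
            return true;
        }
        std::cout << "Switch to standby" << std::endl;  // real switchover work goes here
        m_state = SIMPLE_STANDBY;
        return true;
    }

private:
    SimpleState m_state;
};

Every new state or transition grows these if branches, which is exactly the conditional logic the state classes above replace.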
 

Hello with Thrift

Posted on 2011-04-17 | Category: Systems Programming

Thrift is a software framework for developing cross-language services. You write a .thrift file, and its built-in code generation engine produces the corresponding code in many languages (C++, Java, Python, PHP, Ruby, Erlang, C#, and so on). Below, the classic hello example is used to show how to write a cross-language RPC program with Thrift:
1. Write the Thrift IDL file and save it as hello.thrift:

service Hello
{
void Hello()
}

2. Generate the C++ and Python skeleton code
In the directory containing hello.thrift, run:
thrift -r --gen cpp hello.thrift
thrift -r --gen py hello.thrift
This creates two directories under the current one, gen-cpp and gen-py.
3. Write the C++ server. Copy Hello_server.skeleton.cpp from gen-cpp into the current directory, rename it CppServer.cpp, and modify it as follows:

#include "Hello.h"
#include <protocol/TBinaryProtocol.h>
#include <server/TSimpleServer.h>
#include <transport/TServerSocket.h>
#include <transport/TBufferTransports.h>

using namespace ::apache::thrift;
using namespace ::apache::thrift::protocol;
using namespace ::apache::thrift::transport;
using namespace ::apache::thrift::server;

using boost::shared_ptr;

class HelloHandler : virtual public HelloIf
{
public:
    HelloHandler()
    {
        // Your initialization goes here
    }

    void Hello()
    {
        // Your implementation goes here
        printf("Hello, Thrift\n");
    }
};

int main(int argc, char **argv)
{
    int port = 9090;
    shared_ptr<HelloHandler> handler(new HelloHandler());
    shared_ptr<TProcessor> processor(new HelloProcessor(handler));
    shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
    shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
    shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

    TSimpleServer server(processor, serverTransport, transportFactory, protocolFactory);
    server.serve();
    return 0;
}
4. Write the Python client, PythonClient.py, as follows:

#!/usr/bin/env python
import sys
sys.path.append('./gen-py')

from hello import Hello
from hello.ttypes import *

from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

# Make socket
transport = TSocket.TSocket('localhost', 9090)

# Buffering is critical. Raw sockets are very slow
transport = TTransport.TBufferedTransport(transport)

# Wrap in a protocol
protocol = TBinaryProtocol.TBinaryProtocol(transport)

# Create a client to use the protocol encoder
client = Hello.Client(protocol)

# Connect!
transport.open()

# Call server services
client.Hello()
5. Write the Makefile for the C++ side as follows:

BOOST_DIR = /usr/local/include/boost/
THRIFT_DIR = /usr/local/include/thrift
LIB_DIR = /usr/local/lib
GEN_SRC = ./gen-cpp/hello_types.cpp ./gen-cpp/Hello.cpp
default: server
server: CppServer.cpp
g++ -o CppServer -I${THRIFT_DIR} -I${BOOST_DIR} -I./gen-cpp -L${LIB_DIR} -lthrift CppServer.cpp ${GEN_SRC}
clean:
$(RM) -r CppServer

6. Build the C++ side: just run make to produce the executable; the Python client can be run directly.

That completes a minimal cross-language RPC program; pretty simple.
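For completeness, a C++ client for the same Hello service would follow the usual Thrift C++ client pattern. Below is a sketch only, assuming the HelloClient class generated into gen-cpp/Hello.h:

// CppClient.cpp - hypothetical C++ client for the Hello service (sketch only).
#include "Hello.h"
#include <protocol/TBinaryProtocol.h>
#include <transport/TSocket.h>
#include <transport/TBufferTransports.h>

using namespace ::apache::thrift;
using namespace ::apache::thrift::protocol;
using namespace ::apache::thrift::transport;

using boost::shared_ptr;

int main(int argc, char **argv)
{
    shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
    shared_ptr<TTransport> transport(new TBufferedTransport(socket));
    shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));

    HelloClient client(protocol);   // client class generated from the Hello service
    transport->open();
    client.Hello();                 // remote call handled by the C++ server above
    transport->close();
    return 0;
}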

Writing Data from Scribe to HDFS on CentOS 5.5

Posted on 2011-04-16 | Category: Systems Programming

1. Configure Hadoop following the method in the post "Configuring a Single-Node Hadoop 0.21 on CentOS 5.5".

2. Build scribe with HDFS support
2.1 Download thrift, libevent, boost, and the other dependencies; the latest versions are fine, and each is basically make && make install.
2.2 Download the latest scribe-2.2. I had read online that scribe had many bugs and that only the current development version could write to HDFS; after many failed attempts I assumed the code really was the problem, but it turned out not to be, so just download this release.
2.3 Run bootstrap.sh in the scribe source tree. According to the documentation, a single configure invocation should be enough:

./configure CPPFLAGS="-DHAVE_INTTYPES_H -DHAVE_NETINET_IN_H -I/data/billowqiu/jdk1.6.0_25/include -I/data/billowqiu/jdk1.6.0_25/include/linux -I/data/billowqiu/hadoop-0.20.1+169.89/src/c++/libhdfs" LDFLAGS="-L/data/billowqiu/hadoop-0.20.1+169.89/src/c++/libhdfs/lib -L/data/billowqiu/jdk1.6.0_25/jre/lib/amd64/server" --with-hadooppath=/data/billowqiu/hadoop-0.20.1+169.89 --enable-hdfs

If that does not work, editing the Makefile directly also works: run ./configure --enable-hdfs first, then change the following variable in the Makefile under src to:
CPPFLAGS = -I/usr/local/lib/jdk1.6.0_23/include -I/usr/local/lib/jdk1.6.0_23/include/linux

The JDK paths above should be adjusted to the actual paths on your machine.

When linking boost statically you may hit "undefined reference to `boost::system::generic_category()"; moving -lboost_system -lboost_filesystem to the end of the link line resolved it.

$ cd src
$ g++ -Wall -O3 -L/usr/lib -o scribed store.o store_queue.o conf.o file.o conn_pool.o scribe_server.o network_dynamic_config.o dynamic_bucket_updater.o env_default.o -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -lfb303 -lthrift -lthriftnb -levent -lpthread libscribe.a libdynamicbucketupdater.a -lboost_system -lboost_filesystem

 

2.4 Copy hadoop-0.21.0/hdfs/src/c++/libhdfs/hdfs.h into the scribe/src directory. libhdfs is what lets C/C++ call the HDFS API through JNI. In the hadoop-0.21.0/hdfs/src/c++/libhdfs directory run:
./configure --enable-shared JVM_ARCH=tune=k8 --prefix=`pwd`/nativelib
make install
This produces 5 files under nativelib/lib; copy them all to /usr/local/lib and run ldconfig.
2.5 Build scribe: run make in the scribe/src directory to produce the scribed binary.
2.6 Following the example configuration files in scribe/examples, write a simple HDFS-enabled configuration file, simple_hdfs_example.conf:
port=1463
max_msg_per_second=2000000
check_interval=1
max_queue_size=100000000
num_thrift_server_threads=2

# DEFAULT - write all messages to hadoop

<store>
category=default
target_write_size=20480
type=file
fs_type=hdfs
file_path=hdfs://localhost:9000/scribedata
create_symlink=no
use_hostname_sub_directory=yes
base_filename=thisisoverwritten
max_size=1000000000
rotate_period=100s
add_newlines=1
</store>
<store>
category=qt
target_write_size=20480
type=file
fs_type=hdfs
file_path=hdfs://localhost:9000/scribedata
create_symlink=no
use_hostname_sub_directory=yes
base_filename=thisisoverwritten
max_size=1000000000
rotate_period=100s
add_newlines=1
</store>
2.7 Export the CLASSPATH, i.e. the jar paths that libhdfs needs. I am not sure exactly which jars are required;
the official documentation suggests adding every jar under hadoop/lib. This is what I exported:
export CLASSPATH=$CLASSPATH:/xxx/hadoop-0.21.0/hadoop-common-0.21.0.jar:/xxx/hadoop-0.21.0/hadoop-hdfs-0.21.0.jar:/xxx/hadoop-0.20.1+152/contrib/scribe-log4j/hadoop-0.20.1+152-scribe-log4j.jar:/xxx/hadoop-0.21.0/lib/commons-logging-1.1.1.jar:/xxx/hadoop-0.21.0/lib/commons-logging-api-1.1.jar:/xxx/hadoop-0.21.0/lib/core-3.1.1.jar:/xxx/hadoop-0.21.0/lib/log4j-1.2.15.jar
Here xxx stands for the actual installation paths.

2.8 Scribe can now be started: ./scribed ../examples/simple_hdfs_example.conf
2.9 Send a log line to scribe with the bundled tool: echo "Successful write data to HDFS,I am qiutao" | ./scribe_cat qt
2.10 In the hadoop directory, run bin/hadoop dfs -lsr /scribedata and you will see the data.
The whole process is rather tedious. Pay particular attention to step 2.4: build libhdfs on your own machine, otherwise you will hit puzzling problems, mostly thrown Java exceptions, which cost me quite a bit of time. In the end the data made it into HDFS, I learned a lot along the way, and these notes are kept for future reference.

Thrift Installation Notes

Posted on 2011-04-15 | Categories: C++, Systems Programming

Thrift depends on boost, libevent, openssl, and so on.

The first two can be installed with their defaults, but openssl must be configured with the following option:

./config --prefix=/usr/local/ shared, otherwise Thrift's configure step fails with errors about the ssl or crypto dependencies. This cost me quite a while, so it is recorded here.

In addition, Thrift itself is configured mainly with the following options so that the required headers are picked up:

./configure CPPFLAGS="-DHAVE_INTTYPES_H -DHAVE_NETINET_IN_H"

Configuring a Single-Node Hadoop 0.21 on CentOS 5.5

Posted on 2011-04-10 | Category: Systems Programming

This mainly follows the official Apache documentation: http://hadoop.apache.org/common/docs/r0.21.0/single_node_setup.html
The only thing to watch out for is the default hostname after a CentOS installation, bogon; add the following line to /etc/hosts:
127.0.0.1 bogon.localdomain bogon

Run bin/hadoop namenode -format to format the filesystem,
run bin/start-all.sh to start all the daemons,
and check the processes with jps:
[root@bogon hadoop-0.21.0]# jps
20532 JobTracker
20437 SecondaryNameNode
21589 Jps
20678 TaskTracker
20140 NameNode
20289 DataNode
[root@bogon hadoop-0.21.0]#
 

A Detailed Guide to the ldconfig Command on Linux/Unix

Posted on 2011-04-09 | Category: Reposted

ldconfig is a command for managing dynamic link libraries; its purpose is to make shared libraries available to the whole system.

What the ldconfig command does
It searches the default directories (/lib and /usr/lib) and the directories listed in the dynamic loader configuration file /etc/ld.so.conf for shareable dynamic libraries (named like lib*.so*), and from them creates the links and the cache file needed by the runtime loader (ld.so).
The cache file defaults to /etc/ld.so.cache and holds a sorted list of shared library names.

When to run ldconfig
ldconfig is normally run at system startup; whenever a new shared library is installed, the command has to be run by hand.

Command-line usage:
ldconfig
[-v|--verbose] [-n] [-N] [-X] [-f CONF] [-C CACHE] [-r ROOT] [-l]
[-p|--print-cache] [-c FORMAT] [--format=FORMAT] [-V]
[-?|--help|--usage] path...

The available options are:

(1) -v or --verbose: print each directory as it is scanned, the libraries found, and the names of the links created.
(2) -n: scan only the directories given on the command line; neither the default directories (/lib, /usr/lib) nor the directories listed in /etc/ld.so.conf are scanned.
(3) -N: do not rebuild the cache file (/etc/ld.so.cache). Unless -X is also given, the links are still updated as usual.
(4) -X: do not update the links. Unless -N is also given, the cache is still rebuilt as usual.
(5) -f CONF: use CONF as the dynamic library configuration file instead of the default /etc/ld.so.conf.
(6) -C CACHE: write the generated cache to CACHE instead of the default /etc/ld.so.cache, the file that stores the sorted list of shareable libraries.
(7) -r ROOT: change the application's root directory to ROOT (implemented via chroot). With this option the default configuration file /etc/ld.so.conf actually refers to ROOT/etc/ld.so.conf; for example, with -r /usr/zzz, opening /etc/ld.so.conf really opens /usr/zzz/etc/ld.so.conf. This greatly increases the flexibility of library management.
(8) -l: normally ldconfig creates the library links automatically while scanning; this option enters expert mode, in which links must be set up manually. Ordinary users will not need it.
(9) -p or --print-cache: print the names of all shared libraries stored in the current cache file.
(10) -c FORMAT or --format=FORMAT: choose the cache file format, one of ld (old format), new (new format), or compat (compatible format, the default).
(11) -V: print ldconfig's version and exit.
(12) -?, --help, or --usage: all three print the help text and exit.
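From a program's point of view, the cache built by ldconfig is what the dynamic loader consults when a library is requested at run time. The following is only an illustrative sketch using dlopen; libz.so.1 is merely assumed to be a library installed on the system:

// dltest.cpp - illustrative only: dlopen() asks the dynamic loader to find the
// library, which looks it up via /etc/ld.so.cache (plus the default directories).
// Build with: g++ -o dltest dltest.cpp -ldl
#include <dlfcn.h>
#include <iostream>

int main()
{
    void* handle = dlopen("libz.so.1", RTLD_NOW);  // resolved through the ldconfig cache
    if (!handle)
    {
        std::cerr << "dlopen failed: " << dlerror() << std::endl;
        return 1;
    }
    std::cout << "libz.so.1 found and loaded" << std::endl;
    dlclose(handle);
    return 0;
}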

Log Collection Servers: Scribe vs Flume

Posted on 2011-04-09 | Category: Systems Programming

I read this post about Cloudera's Flume with much interest. Flume sounds like a very interesting tool, not to mention that from Cloudera's business perspective it makes a lot of sense:

We’ve seen our customers have great success using Hadoop for processing their
data, but the question of how to get the data there to process in the first
place was often significantly more challenging.

Just in case you didn't have the time to read about Flume yet, here's a short description from the GitHub project page:

Flume is a distributed, reliable, and available service for efficiently
collecting, aggregating, and moving large amounts of log data. It has a simple
and flexible architecture based on streaming data flows. It is robust and fault
tolerant with tunable reliability mechanisms and many failover and recovery
mechanisms. The system is centrally managed and allows for intelligent dynamic
management. It uses a simple extensible data model that allows for online
analytic applications.

In a way this sounded a bit familiar. I thought I had seen something kind of similar before: Scribe:

Scribe is a server for aggregating streaming log data. It is designed to
scale to a very large number of nodes and be robust to network and node
failures. There is a scribe server running on every node in the system,
configured to aggregate messages and send them to a central scribe server (or
servers) in larger groups. If the central scribe server isn’t available the
local scribe server writes the messages to a file on local disk and sends them
when the central server recovers. The central scribe server(s) can write the
messages to the files that are their final destination, typically on an nfs
filer or a distributed filesystem, or send them to another layer of scribe
servers.

So my question is: how do Flume and Scribe compare? What are the major differences and what scenarios are good for one or the other?

If you have the answer to any of these questions, please drop a comment or send me an email.

Update: Looks like I had failed to find this useful thread, but thanks to this comment the mistake is corrected:

1. Flume allows you to configure your Flume installation from a central
point, without having to ssh into every machine, update a configuration variable
and restart a daemon or two. You can start, stop, create, delete and reconfigure
logical nodes on any machine running Flume from any command line in your network
with the Flume jar available.

2. Flume also has centralised liveness monitoring. We’ve heard a couple of
stories of Scribe processes silently failing, but lying undiscovered for days
until the rest of the Scribe installation starts creaking under the increased
load. Flume allows you to see the health of all your logical nodes in one place
(note that this is different from machine liveness monitoring; often the machine
stays up while the process might fail).

3. Flume supports three distinct types of reliability guarantees, allowing
you to make tradeoffs between resource usage and reliability. In particular,
Flume supports fully ACKed reliability, with the guarantee that all events will
eventually make their way through the event flow.

4. Flume’s also really extensible - it’s really easy to write your own source
or sink and integrate most any system with Flume. If rolling your own is
impractical, it’s often very straightforward to have your applications output
events in a form that Flume can understand (Flume can run Unix processes, for
example, so if you can use shell script to get at your data, you’re golden).

— Henry Robinson

In the same thread, I'm reading about another tool, Chukwa:

Chukwa is a Hadoop subproject devoted to large-scale log collection and
analysis. Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and
MapReduce framework and inherits Hadoop’s scalability and robustness. Chukwa
also includes a flexible and powerful toolkit for displaying, monitoring and
analyzing results, in order to make the best use of this collected data.

Generating and Configuring Linux Core Files

Posted on 2011-04-02 | Category: Systems Programming

First, enabling core file generation. It can be turned on with the ulimit command, but setting it only in a shell is not enough to make it take effect system-wide; instead:

1. Edit /root/.bash_profile and add: ulimit -S -c unlimited

Note that not every distribution ships this file (SUSE, for example, does not); if it is missing, create it by hand.

2. Reboot, or run: source /root/.bash_profile

 

Configuring core files:

1) /proc/sys/kernel/core_uses_pid controls whether the pid is appended to the core file name. If the file contains 1, the pid is appended and core files are named core.xxxx; if it contains 0, every core file is simply named core.

It can be changed with the following command:

echo "1" > /proc/sys/kernel/core_uses_pid

2) /proc/sys/kernel/core_pattern controls where core files are saved and how they are named.

It can be changed with the following command:

echo "/corefile/core-%e-%p-%t" > /proc/sys/kernel/core_pattern

This writes all core files into the /corefile directory, with names of the form core-<command>-<pid>-<timestamp>.

The format specifiers are:

%p - insert pid into filename

%u - insert current uid into filename

%g - insert current gid into filename

%s - insert signal that caused the coredump into the filename

%t - insert UNIX time that the coredump occurred into filename

%h - insert hostname where the coredump happened into filename

%e - insert coredumping executable name into filename

 

The two files above that control core output seem to accept changes only this way; editing them with vi did not work for me, probably because they live under the kernel directory in /proc.
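To check that the settings actually work, a tiny program that crashes on purpose can be used; this is a hypothetical test, not part of the original notes:

// crash.cpp - deliberately dereference a null pointer to raise SIGSEGV and
// produce a core dump (assuming "ulimit -c unlimited" is in effect).
// Build with: g++ -g -o crash crash.cpp, run ./crash, then look for the core
// file at the location configured by core_pattern.
int main()
{
    int* p = 0;
    *p = 42;   // segmentation fault here -> core dumped
    return 0;
}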

 

ISO C++ Committee Approves C++0x Final Draft

Posted on 2011-03-28 | Category: C++

_"On the 25th, in Madrid, Spain, the ISO C++ committee approved a Final Draft International Standard (FDIS)
for the C++ programming language. This means that the proposed changes
to the new standard so far known as C++0x are now final. The
finalization of the standard itself, i.e. updating the working draft and
transmitting the final draft to ITTF
, is due to be completed during the summer, after which the standard is going to be published, to be known as C++ 2011
.
With the previous ISO C++ standard dating back to 2003 and C++0x having
been for over eight years in development, the implementation of the
standard is already well underway in the GCC
and Visual C++
compilers. Bjarne Stroustrup
, the creator of C++, maintains a handy FAQ
of the new standard."_

Custom Drawing the Non-Client Area

Posted on 2011-03-26 | Category: VC

When drawing a custom UI you generally have to deal with the system frame, i.e. the so-called non-client area. This used to give me headaches when I worked on UI code, and back then I worked out a simple approach:

let the window have no non-client area at all and do all the drawing in the client area. That only introduces one other problem: how do you let the window be resized by dragging with the mouse? There is an example on CodeProject for reference, http://www.codeproject.com/KB/MFC/CustomWindow.aspx; its handling is a bit convoluted and can be streamlined.


I discovered another approach while studying the UI of QQ2009 and later. With Spy++ you can see that its resizable window has the following styles:

WS_EX_LEFT|WS_EX_LTRREADING|WS_EX_RIGHTSCROLLBAR|WS_EX_OVERLAPPEDWINDOW

and

WS_POPUP|WS_VISIBLE|WS_CLIPCHILDREN|WS_CLIPSIBLINGS|WS_SYSMENU|WS_THICKFRAME|WS_MAXIMIZEBOX|WS_MINIMIZEBOX

The key style here is WS_THICKFRAME. Normally a window with this style has the system frame, i.e. a non-client area; Spy++ would show a client rectangle whose left edge does not start at 0, and the window can be resized with the mouse. Yet when you look at the QQ window in Spy++, its client area is exactly the same size as the window. Why? Clearly the non-client area has been removed by some means, which immediately brings the WM_NCCALCSIZE message to mind, and indeed that is it: simply return 0 where this message is handled. The MSDN that ships with Visual Studio 2008 does not say that returning 0 is allowed, but the online MSDN documentation contains the following passage:

 

Starting with Windows Vista, removing the standard frame by simply returning 0 when the wParam is TRUE does not affect frames that are extended into the client area using the DwmExtendFrameIntoClientArea function. Only the standard frame will be removed.

 

 

All that remains is to handle the WM_NCHITTEST message, as sketched below.
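A minimal window-procedure sketch of the two messages discussed above (hypothetical Win32 code, not QQ's actual implementation; resizing is only handled for the bottom-right corner):

// Sketch: drop the standard frame but keep mouse-driven moving and resizing.
#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_NCCALCSIZE:
        // Returning 0 when wParam is TRUE removes the standard frame,
        // so the client area covers the entire window rectangle.
        if (wParam)
            return 0;
        break;

    case WM_NCHITTEST:
    {
        // With no real frame left, decide ourselves what the mouse is over.
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };  // screen coordinates
        RECT rc;
        GetWindowRect(hWnd, &rc);
        const int border = 8;  // hypothetical resize-border width in pixels
        if (pt.x >= rc.right - border && pt.y >= rc.bottom - border)
            return HTBOTTOMRIGHT;   // dragging here resizes the window
        return HTCAPTION;           // dragging anywhere else moves the window
    }
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}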

A concrete implementation can be found in this example: http://download.csdn.net/source/3130708

 

PS: I wrote this mainly because I recently came across many open-source DirectUI libraries online, which reminded me of the UI work I did when I had just started working; I learned a lot from it back then. While studying one of those libraries I found that simply returning 0 from WM_NCCALCSIZE removes the system non-client area. Interestingly, the online MSDN documents this while my local copy does not, which sent me down quite a few detours.
