<Big Data Architecture> Notes from Testing a Big Data Cluster on Virtual Machines
Why We Tested
When I was told to benchmark a big data cluster in a virtualized environment, I was baffled: virtualization and big data push hardware in opposite directions to serve their respective workloads, so why would anyone want this? Still, I was genuinely curious how much performance a virtualized big data cluster would lose compared with the same physical servers.
Test Environments
First Test Environment
This environment was provisioned via virtualization on New H3C's cloud platform.
- 14 virtual hosts were provided, with memory and CPU virtualized from 4 physical machines. There were two disk resource pools, each built from two NL-SAS disks in RAID 1 and four SAS disks in RAID 5; each pool provided the disks for 7 of the hosts.
- OS: CentOS 6.5
- CDH 5.5, with HDFS, YARN, ZooKeeper, HBase, Hive, and Kafka installed
- Memory: 128 GB
- Physical CPU cores: 2
- Logical CPU cores: 8
- Disk: 500 GB
- Network: 10 GbE
- Nodes: master node × 1, monitoring node × 1, data nodes × 12
Second Test Environment
This environment was provisioned via virtualization on New H3C's cloud platform.
- 14 virtual hosts were provided, with memory and CPU virtualized from 4 physical machines. There were 14 disk resource pools, each built from four SAS disks in RAID 5; each pool carved out part of its storage as the disk for one host.
- OS: CentOS 6.5
- CDH 5.5, with HDFS, YARN, ZooKeeper, HBase, Hive, and Kafka installed
- Memory: 128 GB
- Physical CPU cores: 2
- Logical CPU cores: 8
- Disk: 1 TB
- Network: 10 GbE
- Nodes: master node × 1, monitoring node × 1, data nodes × 12
(Figure: H3C cloud platform)
Test Results
Disk Write Speed
Write speed was tested with dd at block sizes (bs) of 1024, 2048, 4096, and 8192 bytes.
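A minimal sketch of the dd invocation implied by the table below; writing /dev/zero to a test file is an assumption, since only bs and count were recorded. Note that without oflag=direct or conv=fdatasync, dd largely measures page-cache throughput rather than raw disk speed:
# Hypothetical reconstruction for the bs=4096 row; vary bs/count per the table.
dd if=/dev/zero of=/data/dd_test.bin bs=4096 count=250000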
Test command | Result |
---|---|
bs=1024 count=1000000 | 445 MB/s |
bs=2048 count=500000 | 624 MB/s |
bs=4096 count=250000 | 840 MB/s |
bs=8192 count=125000 | 883 MB/s |
The first and second environments gave similar results.
Network Tests
Network bandwidth was tested with iperf.
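A minimal sketch of the iperf invocation, assuming iperf2 defaults (TCP, 10-second test); the original records only the measured bandwidths:
# On master1 (server side):
iperf -s
# On each other node (client side):
iperf -c master1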
Bandwidth from the other nodes to the master node:
Node | Bandwidth |
---|---|
master2 | 7.83Gbits/sec |
agent1 | 9.44Gbits/sec |
agent2 | 9.71Gbits/sec |
agent3 | 5.76Gbits/sec |
agent4 | 5.28Gbits/sec |
agent5 | 6.91Gbits/sec |
agent6 | 6.69Gbits/sec |
agent7 | 7.76Gbits/sec |
agent8 | 7.72Gbits/sec |
agent9 | 7.82Gbits/sec |
agent10 | 7.56Gbits/sec |
agent11 | 7.13Gbits/sec |
agent12 | 7.56Gbits/sec |
Average | 7.47Gbits/sec
Bandwidth from the master node to the other nodes:
Node | Bandwidth |
---|---|
master2 | 8.26Gbits/sec |
agent1 | 8.86Gbits/sec |
agent2 | 9.53Gbits/sec |
agent3 | 7.79Gbits/sec |
agent4 | 7.16Gbits/sec |
agent5 | 5.8Gbits/sec |
agent6 | 8.22Gbits/sec |
agent7 | 8.43Gbits/sec |
agent8 | 6.13Gbits/sec |
agent9 | 7.17Gbits/sec |
agent10 | 7.78Gbits/sec |
agent11 | 7.95Gbits/sec |
agent12 | 8.85Gbits/sec |
Average | 7.84Gbits/sec
The first and second environments gave similar results.
TestDFSIO Tests
Tested with the TestDFSIO benchmark from hadoop-mapreduce-client-jobclient.jar.
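The matching read test and cleanup use the same jar; this is a sketch with standard TestDFSIO flags rather than commands taken from the original log:
# Read back the files written by a -write run, then clean up the test data.
hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -read -nrFiles 12 -fileSize 10240
hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -clean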
First Test Environment
Test command 1
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 1 -fileSize 240GB
Test result 1
The job ran for over an hour without finishing and was killed.
Test command 2
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 12 -fileSize 10240
Test result 2
16/12/20 16:18:58 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
16/12/20 16:18:58 INFO fs.TestDFSIO: Date & time: Tue Dec 20 16:18:58 CST 2016
16/12/20 16:18:58 INFO fs.TestDFSIO: Number of files: 12
16/12/20 16:18:58 INFO fs.TestDFSIO: Total MBytes processed: 122880.0
16/12/20 16:18:58 INFO fs.TestDFSIO: Throughput mb/sec: 8.97392042013839
16/12/20 16:18:58 INFO fs.TestDFSIO: Average IO rate mb/sec: 10.6062650680542
16/12/20 16:18:58 INFO fs.TestDFSIO: IO rate std deviation: 4.99261083128042
16/12/20 16:18:58 INFO fs.TestDFSIO: Test exec time sec: 1906.565
16/12/20 16:18:58 INFO fs.TestDFSIO:
real 31m49.479s
user 0m13.032s
sys 0m1.251s
Test command 3
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 48 -fileSize 1024
Test result 3
16/12/20 16:34:04 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
16/12/20 16:34:04 INFO fs.TestDFSIO: Date & time: Tue Dec 20 16:34:04 CST 2016
16/12/20 16:34:04 INFO fs.TestDFSIO: Number of files: 48
16/12/20 16:34:04 INFO fs.TestDFSIO: Total MBytes processed: 49152.0
16/12/20 16:34:04 INFO fs.TestDFSIO: Throughput mb/sec: 51.0403394381949
16/12/20 16:34:04 INFO fs.TestDFSIO: Average IO rate mb/sec: 51.61402893066406
16/12/20 16:34:04 INFO fs.TestDFSIO: IO rate std deviation: 5.727437558112107
16/12/20 16:34:04 INFO fs.TestDFSIO: Test exec time sec: 44.828
16/12/20 16:34:04 INFO fs.TestDFSIO:
real 0m50.895s
user 0m6.262s
sys 0m0.457s
Test command 4
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 48 -fileSize 4096
Test result 4
16/12/20 17:32:41 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
16/12/20 17:32:41 INFO fs.TestDFSIO: Date & time: Tue Dec 20 17:32:41 CST 2016
16/12/20 17:32:41 INFO fs.TestDFSIO: Number of files: 48
16/12/20 17:32:41 INFO fs.TestDFSIO: Total MBytes processed: 196608.0
16/12/20 17:32:41 INFO fs.TestDFSIO: Throughput mb/sec: 2.378699777207413
16/12/20 17:32:41 INFO fs.TestDFSIO: Average IO rate mb/sec: 2.7269303798675537
16/12/20 17:32:41 INFO fs.TestDFSIO: IO rate std deviation: 1.397317044124283
16/12/20 17:32:41 INFO fs.TestDFSIO: Test exec time sec: 2993.184
16/12/20 17:32:41 INFO fs.TestDFSIO:
real 54m47.815s
user 0m16.408s
sys 0m2.043s
The first round's results were very poor. I suggested the disks were the problem and asked in detail about how disks were provided in the virtualized environment, which led to the second round of testing.
Second Test Environment
Test command 1
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 1 -fileSize 240GB
Test result 1
----- TestDFSIO ----- : write
Date & time: Wed Dec 21 23:59:32 CST 2016
Number of files: 1
Total MBytes processed: 245760.0
Throughput mb/sec: 118.75194489167558
Average IO rate mb/sec: 118.75194549560547
IO rate std deviation: 0.012038635802242013
Test exec time sec: 2091.628
Test command 2
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 12 -fileSize 10240
Test result 2
----- TestDFSIO ----- : write
Date & time: Wed Dec 21 23:16:23 CST 2016
Number of files: 12
Total MBytes processed: 122880.0
Throughput mb/sec: 60.30615384841895
Average IO rate mb/sec: 61.08230972290039
IO rate std deviation: 6.9769796872495915
Test exec time sec: 230.174
Test command 3
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 48 -fileSize 1024
Test result 3
----- TestDFSIO ----- : write
Date & time: Wed Dec 21 23:10:06 CST 2016
Number of files: 48
Total MBytes processed: 49152.0
Throughput mb/sec: 47.43742182553776
Average IO rate mb/sec: 48.093387603759766
IO rate std deviation: 6.128223358274526
Test exec time sec: 45.674
Test command 4
time hadoop jar /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar TestDFSIO -write -nrFiles 48 -fileSize 4096
Test result 4
----- TestDFSIO ----- : write
Date & time: Wed Dec 21 23:05:47 CST 2016
Number of files: 48
Total MBytes processed: 196608.0
Throughput mb/sec: 12.254119656796394
Average IO rate mb/sec: 12.745087623596191
IO rate std deviation: 2.7699301787898576
Test exec time sec: 459.636
HBase YCSB Tests
Loading the data (inserts):
[root@master1 bin]# ./ycsb load hbase10 -P ../workloads/workloada -cp /etc/hbase/conf -p table=usertable -p columnfamily=family -p recordcount=100000 -threads 10
[OVERALL], RunTime(ms), 19371.0
[OVERALL], Throughput(ops/sec), 5162.3560993237315
[TOTAL_GCS_PS_Scavenge], Count, 6.0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 66.0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.34071550255536626
[TOTAL_GCS_PS_MarkSweep], Count, 0.0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 6.0
[TOTAL_GC_TIME], Time(ms), 66.0
[TOTAL_GC_TIME_%], Time(%), 0.34071550255536626
[CLEANUP], Operations, 20.0
[CLEANUP], AverageLatency(us), 6220.45
[CLEANUP], MinLatency(us), 2.0
[CLEANUP], MaxLatency(us), 123711.0
[CLEANUP], 95thPercentileLatency(us), 505.0
[CLEANUP], 99thPercentileLatency(us), 123711.0
[INSERT], Operations, 100000.0
[INSERT], AverageLatency(us), 1806.23058
[INSERT], MinLatency(us), 786.0
[INSERT], MaxLatency(us), 275199.0
[INSERT], 95thPercentileLatency(us), 3123.0
[INSERT], 99thPercentileLatency(us), 5175.0
[INSERT], Return=OK, 100000
Running the read/update workload:
[root@master1 bin]# ./ycsb run hbase10 -P ../workloads/workloada -cp /etc/hbase/conf -p measurementtype=timeseries -p columnfamily=family -p timeseries.granularity=2000 -p threads=10
[OVERALL], RunTime(ms), 4266.0
[OVERALL], Throughput(ops/sec), 234.4116268166901
[TOTAL_GCS_PS_Scavenge], Count, 1.0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 14.0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.32817627754336615
[TOTAL_GCS_PS_MarkSweep], Count, 0.0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 1.0
[TOTAL_GC_TIME], Time(ms), 14.0
[TOTAL_GC_TIME_%], Time(%), 0.32817627754336615
[CLEANUP], Operations, 2
[CLEANUP], AverageLatency(us), 62235.5
[CLEANUP], MinLatency(us), 18
[CLEANUP], MaxLatency(us), 124453
[CLEANUP], 0, 62235.5
[READ], Operations, 535
[READ], AverageLatency(us), 2518.143925233645
[READ], MinLatency(us), 1630
[READ], MaxLatency(us), 49110
[READ], Return=OK, 535
[READ], 0, 2606.058666666667
[READ], 2000, 2312.09375
[UPDATE], Operations, 465
[UPDATE], AverageLatency(us), 3557.9505376344086
[UPDATE], MinLatency(us), 2429
[UPDATE], MaxLatency(us), 155603
[UPDATE], Return=OK, 465
[UPDATE], 0, 3747.092105263158
[UPDATE], 2000, 3200.813664596273
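The roughly even READ/UPDATE split above (535 reads vs 465 updates) matches workloada, YCSB's standard 50/50 read/update mix. Its core properties, as in the stock workload file shipped with YCSB (quoted from memory for context, so treat as approximate):
readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0
requestdistribution=zipfian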
The first and second rounds gave similar results.
Hive Tests
We generated a books table, a customers table, and a transactions table of 4 GB each, created the tables in Hive, loaded the data, and then ran the following analytical queries.
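A hedged sketch of the table-creation and load step for one table; the column list is reconstructed from the queries below, while the data types, delimiter, and HDFS path are assumptions (customers and transactions follow the same pattern):
# Hypothetical schema/load for books; only the column names are grounded in the queries.
hive -e "
CREATE TABLE books (id BIGINT, category STRING, price DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA INPATH '/tmp/books.csv' INTO TABLE books;
"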
Scan query
SELECT COUNT(*) FROM customers WHERE name = 'Asher MATTHEWS';
The query took 33.648 s.
Aggregation query
SELECT category,count(*) cnt FROM books GROUP BY category ORDER BY cnt DESC LIMIT 10;
The query took 53.289 s.
Two-table join query
SELECT tmp.book_category, ROUND(tmp.revenue, 2) AS revenue
FROM (
SELECT books.category AS book_category, SUM(books.price * transactions.quantity) AS revenue
FROM books JOIN transactions ON (
transactions.book_id = books.id
AND YEAR(transactions.transaction_date) BETWEEN 2008 AND 2010
)
GROUP BY books.category
) tmp
ORDER BY revenue DESC LIMIT 10;
The query took 106.791 s.
Three-table join query
SELECT tmp.book_category, ROUND(tmp.revenue, 2) AS revenue
FROM (
SELECT books.category AS book_category, SUM(books.price * transactions.quantity) AS revenue
FROM books
JOIN transactions ON (
transactions.book_id = books.id
)
JOIN customers ON (
transactions.customer_id = customers.id
AND customers.state IN ('WA', 'CA', 'NY')
)
GROUP BY books.category
) tmp
ORDER BY revenue DESC LIMIT 10;
The query took 214.394 s.
Two-table join query (a repeat run of the join above)
SELECT tmp.book_category, ROUND(tmp.revenue, 2) AS revenue
FROM (
SELECT books.category AS book_category, SUM(books.price * transactions.quantity) AS revenue
FROM books JOIN transactions ON (
transactions.book_id = books.id
AND YEAR(transactions.transaction_date) BETWEEN 2008 AND 2010
)
GROUP BY books.category
) tmp
ORDER BY revenue DESC LIMIT 10;
The query took 101.748 s.
The first and second rounds gave similar results.
Kafka Tests
The Kafka version was 1.2.0.
Creating the topic
[root@master1 bin]# ./kafka-topics.sh --create --zookeeper agent1:2181,agent2:2181,agent3:2181 --replication-factor 3 --partitions 1 --topic test
Created topic "test".
Producing messages: 100000 records of 300 kB each were produced (--throughput -1 disables rate limiting).
[root@master1 bin]# ./kafka-producer-perf-test.sh --topic test --num-records 100000 --record-size 300000 --throughput -1 --producer-props bootstrap.servers=127.0.0.1:9092
2372 records sent, 474.3 records/sec (135.70 MB/sec), 218.0 ms avg latency, 277.0 max latency.
2949 records sent, 589.8 records/sec (168.74 MB/sec), 189.8 ms avg latency, 219.0 max latency.
2991 records sent, 598.2 records/sec (171.15 MB/sec), 187.2 ms avg latency, 258.0 max latency.
3012 records sent, 602.4 records/sec (172.35 MB/sec), 185.6 ms avg latency, 208.0 max latency.
......
597 records sent, 115.2 records/sec (32.97 MB/sec), 898.9 ms avg latency, 1291.0 max latency.
642 records sent, 128.3 records/sec (36.72 MB/sec), 926.3 ms avg latency, 1277.0 max latency.
503 records sent, 100.3 records/sec (28.68 MB/sec), 1138.0 ms avg latency, 1359.0 max latency.
100000 records sent, 135.132579 records/sec (38.66 MB/sec), 827.39 ms avg latency, 5633.00 ms max latency, 788 ms 50th, 1703 ms 95th, 2601 ms 99th, 4675 ms 99.9th.
In total, 100000 records were sent at 135.132579 records/sec (38.66 MB/sec of message data), with an average latency of 827.39 ms and a maximum latency of 5633.00 ms.
Consuming messages
[root@master1 bin]# ./kafka-consumer-perf-test.sh --topic test --broker-list agent1,agent2,agent3,agent4,agent5,agent6,agent7,agent8,agent9,agent10,agent11,agent12 --messages 100000 --zookeeper agent1,agent2,agent3 --threads 12
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
[2016-12-21 14:55:39,488] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-1 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,489] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-10 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,489] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-11 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,489] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-2 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,489] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-3 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,489] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-4 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,490] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-5 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,490] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-6 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,490] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-7 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,490] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-8 for topic test (kafka.consumer.RangeAssignor)
[2016-12-21 14:55:39,490] WARN No broker partitions consumed by consumer thread perf-consumer-38768_master1-1482303333792-8983ac74-9 for topic test (kafka.consumer.RangeAssignor)
2016-12-21 14:55:54:967, 2016-12-21 14:57:32:359, 28610.2295, 293.7637, 100000, 1026.7784
The run started at 2016-12-21 14:55:54 and ended at 2016-12-21 14:57:32, about 98 s of consumption time (computed from the timestamps). Total data consumed was 28610.2295 MB, consistent with 100000 records × 300000 bytes ≈ 28610 MiB, at 293.7637 MB/sec; 100000 messages in total, or 1026.7784 messages consumed per second.
The first and second rounds gave similar results.
For reference, with 10 kB records the consumer handled roughly 3400 messages/sec, and with 1 MB records roughly 100 messages/sec.
Summary of These Tests
The results fall well short of what the same tests achieve on physical machines. Memory and CPU on the VMs were adequate; the main problem this test uncovered was that the virtual disks' write speed cannot compete with writing across multiple directly attached disks. Also, overall performance is effectively the product of many hardware and configuration factors, so with equivalent specs a virtualized environment will inevitably test worse than physical machines: if each factor is slightly worse, say network, disk throughput, and disk latency each drop to 90%, the multiplied effect is already 0.9 × 0.9 × 0.9 ≈ 73%, and the gap grows quickly.
My Take on Big Data and Virtualization (for this test scenario only)
On big data
- Why it exists: with the growth of the Internet, production systems generate data volumes too large for traditional approaches to store and compute on, or too costly for enterprises to handle that way; big data systems emerged to store and process this ever-growing data.
- How it works: the cluster's master node handles scheduling while the worker nodes do the data reads and writes, and each node writes to multiple disks concurrently.
- Suited to: storing and computing over large volumes of data, with very high demands on hardware resources.
- Disk architecture: disks are directly attached.
On virtualization (my understanding of virtualization is limited, so some of this may be off)
- Why it exists: for ordinary business workloads, hardware capacity has grown several-fold and oversized machines waste resources, while different services want different system parameters; so a virtualization layer was inserted between the hardware and the services to present multiple virtual nodes running different services.
- How it works: traditionally the operating system talks to the hardware directly through hardware interfaces; with virtualization, a hypervisor sits between the hardware and the OS, consuming the hardware interfaces and exposing similar virtual interfaces to the guest operating systems.
- Suited to: deploying many applications that need different OS parameters and have modest, or at least not stringent, hardware requirements.
- Disk architecture: disks are grouped into RAID for performance, the RAID groups form resource pools, and storage is carved out of the pools and attached to VMs.
Open questions about running big data in a virtualized environment
- A big data cluster's bottlenecks are the network and the disks; how much slower are virtualized network and disks than bare metal?
On the network: in the virtual environment the 10 GbE links averaged 7.6 Gbits/sec, whereas on our earlier physical 10 GbE deployments this figure is usually above 8.5 Gbits/sec and can approach 9.5 Gbits/sec.
On the disks: operations engineers with virtualization experience tell me virtualized disk write throughput is typically 90% of bare metal, 95% at best, but sometimes only 70%~80%, depending on the virtualization product and its configuration; write latency generally increases as well, again by an amount that depends on the product and configuration.
- In this setting, which workloads are hit hardest?
With the current architecture, disk IOPS and write latency are the cluster's bottleneck; any large-volume write, or any job whose output (or intermediate results) is large, can stall for a long time without completing.
- Are the optimizations we apply to physical machines supported in the virtualized environment?
For example, is dual-NIC bonding supported? Is tuning the network MTU supported (at least under the test architecture we could not try this; raising the MTU broke the other tests)? And can the virtualized environment properly support other hardware-level optimizations?
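To make the MTU item concrete, a minimal sketch of checking and raising the MTU on one node; the interface name eth0 is an assumption:
# Inspect the current MTU, then try jumbo frames; revert if other traffic breaks.
cat /sys/class/net/eth0/mtu
ip link set dev eth0 mtu 9000
# Verify a full 9000-byte frame passes unfragmented (8972 = 9000 - 20 IP - 8 ICMP).
ping -M do -s 8972 agent1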