

MongoDB: 9. Sharding (2)

Source: 程序員人生 | Published: 2014-03-08 11:51:03 | Views: 4009
MongoDB Auto-Sharding solves massive storage and dynamic scaling, but it still falls short of the high reliability and high availability a real production environment requires.

The solution:
Shard: use Replica Sets, so every data node has replication, automatic failover, and automatic recovery.
Config: use 3 config servers to guarantee metadata integrity (two-phase commit).
Route: pair with LVS for load balancing and better front-end performance (high performance).
Below we set up a Replica Sets + Sharding test environment.
Use IP addresses throughout the whole setup to avoid mistakes.

(1) First, create all the database directories.

$ sudo mkdir -p /var/mongodb/10001
$ sudo mkdir -p /var/mongodb/10002
$ sudo mkdir -p /var/mongodb/10003

$ sudo mkdir -p /var/mongodb/10011
$ sudo mkdir -p /var/mongodb/10012
$ sudo mkdir -p /var/mongodb/10013

$ sudo mkdir -p /var/mongodb/config1
$ sudo mkdir -p /var/mongodb/config2
$ sudo mkdir -p /var/mongodb/config3
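The nine mkdir calls above can be collapsed into one loop; a sketch using the same /var/mongodb base path (prefix with sudo if the path is not writable by your user):

```shell
# create the dbpath directories for both shard sets and the three config servers
base=/var/mongodb
for d in 10001 10002 10003 10011 10012 10013 config1 config2 config3; do
  mkdir -p "$base/$d"
done
```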
(2) Configure the shard Replica Sets.

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10001 --port 10001 --nohttpinterface --replSet set1
forked process: 4974
all output going to: /dev/null

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10002 --port 10002 --nohttpinterface --replSet set1
forked process: 4988
all output going to: /dev/null

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10003 --port 10003 --nohttpinterface --replSet set1
forked process: 5000
all output going to: /dev/null

$ ./mongo --port 10001

MongoDB shell version: 1.6.2
connecting to: 127.0.0.1:10001/test

> cfg = { _id:'set1', members:[
... { _id:0, host:'192.168.1.202:10001' },
... { _id:1, host:'192.168.1.202:10002' },
... { _id:2, host:'192.168.1.202:10003' }
... ]};

> rs.initiate(cfg)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

> rs.status()
{
    "set" : "set1",
    "date" : "Tue Sep 07 2010 10:25:28 GMT+0800 (CST)",
    "myState" : 5,
    "members" : [
        {
            "_id" : 0,
            "name" : "yuhen-server64:10001",
            "health" : 1,
            "state" : 5,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.1.202:10002",
            "health" : -1,
            "state" : 6,
            "uptime" : 0,
            "lastHeartbeat" : "Thu Jan 01 1970 08:00:00 GMT+0800 (CST)"
        },
        {
            "_id" : 2,
            "name" : "192.168.1.202:10003",
            "health" : -1,
            "state" : 6,
            "uptime" : 0,
            "lastHeartbeat" : "Thu Jan 01 1970 08:00:00 GMT+0800 (CST)"
        }
    ],
    "ok" : 1
}
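The numeric state fields in rs.status() are replica-set member state codes: myState: 5 means this member is still in STARTUP2 (loading the initial config), and 6 means UNKNOWN (the peer has not reported yet). Once the set is healthy you should see one member in state 1 (PRIMARY) and the rest in state 2 (SECONDARY). A minimal lookup helper for reading these transcripts (mapping per MongoDB's replica set documentation):

```shell
# map rs.status() numeric state codes to their names
state_name() {
  case "$1" in
    0) echo STARTUP ;;
    1) echo PRIMARY ;;
    2) echo SECONDARY ;;
    3) echo RECOVERING ;;
    5) echo STARTUP2 ;;
    6) echo UNKNOWN ;;
    7) echo ARBITER ;;
    8) echo DOWN ;;
    9) echo ROLLBACK ;;
    *) echo "OTHER($1)" ;;
  esac
}

state_name 5   # STARTUP2: still initializing, not yet usable
state_name 1   # PRIMARY
```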
Configure the second shard Replica Set.

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10011 --port 10011 --nohttpinterface --replSet set2
forked process: 5086
all output going to: /dev/null

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10012 --port 10012 --nohttpinterface --replSet set2
forked process: 5098
all output going to: /dev/null

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10013 --port 10013 --nohttpinterface --replSet set2
forked process: 5112
all output going to: /dev/null

$ ./mongo --port 10011

MongoDB shell version: 1.6.2
connecting to: 127.0.0.1:10011/test

> cfg = { _id:'set2', members:[
... { _id:0, host:'192.168.1.202:10011' },
... { _id:1, host:'192.168.1.202:10012' },
... { _id:2, host:'192.168.1.202:10013' }
... ]}

> rs.initiate(cfg)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

> rs.status()
{
    "set" : "set2",
    "date" : "Tue Sep 07 2010 10:28:37 GMT+0800 (CST)",
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "yuhen-server64:10011",
            "health" : 1,
            "state" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.1.202:10012",
            "health" : 0,
            "state" : 6,
            "uptime" : 0,
            "lastHeartbeat" : "Tue Sep 07 2010 10:28:36 GMT+0800 (CST)",
            "errmsg" : "still initializing"
        },
        {
            "_id" : 2,
            "name" : "192.168.1.202:10013",
            "health" : 1,
            "state" : 5,
            "uptime" : 1,
            "lastHeartbeat" : "Tue Sep 07 2010 10:28:36 GMT+0800 (CST)",
            "errmsg" : "."
        }
    ],
    "ok" : 1
}
(3) Start the Config Servers.

We could get by with a single Config Server, but three give stronger guarantees.

Chunk information is the main data stored by the config servers. Each config server has a complete copy of all chunk information. A two-phase commit is used to ensure the consistency of the configuration data among the config servers.

If any of the config servers is down, the cluster's meta-data goes read only. However, even in such a failure state, the MongoDB cluster can still be read from and written to.

Note: the config servers are not a Replica Set, so the --replSet flag is not used.

$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config1 --port 20000 --nohttpinterface
forked process: 5177
all output going to: /dev/null

$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config2 --port 20001 --nohttpinterface
forked process: 5186
all output going to: /dev/null

$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config3 --port 20002 --nohttpinterface
forked process: 5195
all output going to: /dev/null

$ ps aux | grep configsvr | grep -v grep
root ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config1 --port 20000 --nohttpinterface
root ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config2 --port 20001 --nohttpinterface
root ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config3 --port 20002 --nohttpinterface
(4) Start the Route Server.

Note the --configdb parameter.

$ sudo ./mongos --fork --logpath /dev/null --configdb "192.168.1.202:20000,192.168.1.202:20001,192.168.1.202:20002"
forked process: 5209
all output going to: /dev/null

$ ps aux | grep mongos | grep -v grep
root ./mongos --fork --logpath /dev/null --configdb 192.168.1.202:20000,192.168.1.202:20001,192.168.1.202:20002
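The --configdb value is just a comma-separated host:port list with no spaces. A small, purely illustrative helper that assembles it from the host and ports used above:

```shell
# assemble the comma-separated --configdb string for mongos
host=192.168.1.202
configdb=""
for port in 20000 20001 20002; do
  # append ",host:port", adding the comma only after the first entry
  configdb="${configdb:+$configdb,}$host:$port"
done
echo "$configdb"   # 192.168.1.202:20000,192.168.1.202:20001,192.168.1.202:20002
```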
(5) Configure sharding.

Note the format addshard expects when adding a Replica Set.

$ ./mongo

MongoDB shell version: 1.6.2
connecting to: test

> use admin
switched to db admin

> db.runCommand({ addshard:'set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003' })
{ "shardAdded" : "set1", "ok" : 1 }

> db.runCommand({ addshard:'set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013' })
{ "shardAdded" : "set2", "ok" : 1 }

> db.runCommand({ enablesharding:'test' })
{ "ok" : 1 }

> db.runCommand({ shardcollection:'test.data', key:{_id:1} })
{ "collectionsharded" : "test.data", "ok" : 1 }

> db.runCommand({ listshards:1 })
{
    "shards" : [
        {
            "_id" : "set1",
            "host" : "set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003"
        },
        {
            "_id" : "set2",
            "host" : "set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013"
        }
    ],
    "ok" : 1
}

> printShardingStatus()

--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      {
        "_id" : "set1",
        "host" : "set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003"
      }
      {
        "_id" : "set2",
        "host" : "set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013"
      }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "set1" }
      test.data chunks:
          { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : set1 { "t" : 1000, "i" : 0 }

---- Configuration complete ----

OK! That's basically it. Let's run a quick test.

> use test
switched to db test

> db.data.insert({name:1})
> db.data.insert({name:2})
> db.data.insert({name:3})

> db.data.find()
{ "_id" : ObjectId("4c85a6d9ce93b9b1b302ebe7"), "name" : 1 }
{ "_id" : ObjectId("4c85a6dbce93b9b1b302ebe8"), "name" : 2 }
{ "_id" : ObjectId("4c85a6ddce93b9b1b302ebe9"), "name" : 3 }
For placing LVS in front of the Route servers, see 《LVS 負載均衡》 (LVS Load Balancing).