
REPLICATION PART 2:

--------------------------

=> Voting options
=> Write Concern
=> Capacity planning
=> Deployment Options

===============================
RECONFIGURING A REPLICA SET
-------------------------------

Config options

rs.conf()          ## returns the current replica set configuration
replSetReconfig    ## server command that applies a new configuration
rs.reconfig()      ## shell helper that wraps replSetReconfig

reconfiguring is also possible when a server is down (see Ex 5.4, force:true)
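
A minimal sketch of the usual reconfig cycle (run from the primary; the
fetch-edit-push pattern is the same for every option shown in this section):

############
## from the primary
var cfg = rs.conf()      // fetch the live configuration
cfg.version              // reconfig bumps this version number
rs.reconfig(cfg)         // push the (edited) copy back to the set
rs.conf()                // verify the change was applied
############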

===============================
ARBITERS:
--------------------------

############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" , <options>},
{ _id:1 , host : "<ip_of_hostname>:27002" , arbiterOnly:true },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############

=> the most common option is [ arbiterOnly:true ]
=> arbiters hold no data but take part in elections for primary
=> if the network is partitioned, the side that can still see a majority
   of the voters (including the arbiter) elects the primary
=> this is how an arbiter solves the split-brain problem: it breaks the
   tie in an even-numbered set

===============================
Priority Options:
--------------------------

############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" , <options>},
{ _id:1 , host : "<ip_of_hostname>:27002" , priority:<n> },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############

=> priority is used to create a bias in elections (a quick sketch follows below)
=> the higher the number, the higher the chance of becoming primary
=> zero means the member can never become primary
=> the default priority is 1
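
A quick sketch, assuming the set above is already running: bias member 0
toward winning elections by raising its priority (the index and value are
illustrative).

############
var cfg = rs.conf()
cfg.members[0].priority = 2   // member 0 is now preferred as primary
rs.reconfig(cfg)
############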

===============================
Hidden Option and slaveDelay:
--------------------------
############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" , <options>},
{ _id:1 , host : "<ip_of_hostname>:27002" , hidden:<bool> },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############

############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" , <options>},
{ _id:1 , host : "<ip_of_hostname>:27002" , slaveDelay:<seconds> },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############

=> a slave-delayed member is kind of like a rolling backup
=> it stays the specified number of seconds behind the primary's data
=> a delayed member should also be hidden and have priority 0, so it can
   never become primary (a combined sketch follows below)
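
A minimal combined sketch (hosts and the 8-hour delay are placeholders): a
delayed, hidden, never-primary member alongside two normal members.

############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" },
{ _id:1 , host : "<ip_of_hostname>:27002" ,
  priority:0 , hidden:true , slaveDelay:8*3600 },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############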

===============================
Voting Options:
--------------------------

## it is generally preferred not to change votes (leave one vote per member)


############
cfg = {
_id : "abc",
members : [
{ _id:0 , host : "<ip_of_hostname>:27001" , <options>},
{ _id:1 , host : "<ip_of_hostname>:27002" , votes:<n> },
{ _id:2 , host : "<ip_of_hostname>:27003"}
]
}
############

votes of 1 and 0 are better than votes of 2 and 1 (use votes:0 to silence
a member rather than weighting another member up)


===============================
Applied Reconfiguration:
--------------------------
making the third member slave-delayed

## from the primary
var cfg = rs.conf()
cfg
cfg.members[2].slaveDelay = 8 * 3600;   // slaveDelay takes seconds, not a boolean
cfg

rs.reconfig(cfg)

cfg = rs.conf()

## make the third server never become primary

cfg.members[2]
cfg.members[2].priority=0;

rs.reconfig(cfg);

### reconfiguring while a member is down

rs.status()
cfg = rs.conf()
cfg.members[2].slaveDelay = 8 * 3600
cfg.members[2].hidden = true
cfg

rs.reconfig(cfg)    ## still works: 2 of 3 members are up, a majority

rs.isMaster()

mongo --port 27003

rs.conf()
rs.slaveOk()        ## allow reads on this secondary
db.foo.count()

help connect        ## shell help on opening extra connections

var server2 = new Mongo('localhost:27002')
server2

db.isMaster().me

var server2_test = server2.getDB('test')   // get a DB handle on the new connection
server2_test.setSlaveOk()                  // setSlaveOk goes on the DB handle, not on count()
server2_test.foo.count()

use local
db.oplog.rs.count()    ## the oplog lives in the local database

===============================
WRITE CONCERN PRINCIPLES:
--------------------------
Cluster-wide commit
and
Write Concern

=> a write replicated to a majority of the servers = committed = durable
=> a write applied only on the primary can still be rolled back if the
   primary fails before it replicates

principles:
---------
1.) a write is truly committed upon application at a majority of the set
2.) we can get acknowledgement of this via getLastError

db.foo.insert({x: 1})
db.getLastError('majority', 8000)    // the shell helper takes w and wtimeout positionally

===============================
Examining 'w' parameter
--------------------------

=> it is important to set a write concern, e.g. { w: 'majority', wtimeout: 8000 }

write concern levels:
1.) no call to GLE (fire and forget: never call getLastError)
2.) w : 1 (not super critical)
3.) w : 'majority' (most things that are important)
4.) w : 3 (all members: flow control, bulk imports, slow machines, etc.)

variation: "call GLE every N writes"

===============================
Write Concern USE CASES & Patterns
--------------------------

=> page view counter (no user input)
=> logging

mapping use cases to the write concern levels:
1.) no call to GLE (fire and forget): page view counters, logging
2.) w : 1 (not super critical, but catches errors such as dupkey)
3.) w : 'majority' (most things that are important)
4.) w : 3 (all members: flow control for bulk imports on slow machines)

variation: "call GLE every N writes", e.g.:

for (var i = 0; i < 100000; i++) {
    db.foo.insert(arr[i]);
    if (i % 500 == 0)
        db.getLastError('majority', 8000);   // checkpoint every 500 inserts
}

5.) w : <tag> (tag-based write concern)

an arbiter holds no data, so it does not count toward getLastError /
write concern; with w : 3 and an arbiter in the set, the write can never
be acknowledged by three data-bearing members

===============================
REEXAMINING THE PAGE VIEW COUNTER PATTERN
--------------------------
=> w1 w2 w3 w4 GLE (one GLE only at the end)
=> w1 GLE w2 w3 w4 GLE (more preferable: the early GLE surfaces errors
   such as dupkey before the whole batch runs)

every Nth write + sharding

guidelines (a sketch of the preferred pattern follows below):
=> use a write concern
=> use w 'majority'
=> tune it if and only if it is slow
=> call GLE when the job ends
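
A minimal sketch of the preferred pattern (arr is illustrative): one GLE
right after the first write to surface errors early, one more at the end.

############
db.foo.insert(arr[0]);
db.getLastError('majority', 8000);   // early GLE catches errors up front
for (var i = 1; i < arr.length; i++) {
    db.foo.insert(arr[i]);
}
db.getLastError('majority', 8000);   // final GLE when the job ends
############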

===============================
w TIMEOUT & CAPACITY PLANNING
--------------------------
=> batch inserts
=> db.runCommand( { getLastError: 1, w: 'majority', wtimeout: 8000 } )

=> without a wtimeout, connections pile up while waiting on a slow system
   (a sketch of handling the timeout follows below)
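
A sketch of reacting to a timeout, using the getLastError command form; the
wtimeout and err fields follow the command's documented reply:

############
var res = db.runCommand({ getLastError: 1, w: 'majority', wtimeout: 8000 })
if (res.wtimeout) {
    // the write hit the primary but did not reach a majority within
    // 8 seconds; back off instead of letting connections pile up
    print('replication lagging: ' + res.err)
}
############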

===============================
REPLICA SETS IN A SINGLE DATACENTER
--------------------------

Recommended configurations for replica sets

Limits per set:

=> no more than 12 members
=> no more than 7 voters

===============================
Mixed Storage Engine Replica Set
--------------------------

Different storage engines can be used for different members.

Replication ships operations from the primary (the oplog), not bytes, so
members do not have to share an on-disk format.

Reasons for creating a mixed replica set:
=> testing
=> upgrading
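
A sketch of launching a mixed set (ports, paths, and the set name follow
the style used elsewhere in these notes; --storageEngine is the standard
mongod flag in 3.x):

############
mongod --port 27001 --replSet abc --dbpath C:\data\db\1 --storageEngine wiredTiger
mongod --port 27002 --replSet abc --dbpath C:\data\db\2 --storageEngine mmapv1
############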

===============================
EX 5.1
--------------------------
Step 1 :

mongod --port 27004 --replSet turing --dbpath C:\data\db\4


mongod --port 27005 --replSet turing --dbpath C:\data\db\5
mongod --port 27006 --replSet turing --dbpath C:\data\db\6

Step 2:

cfg = {
_id : "turing",
members : [
{ _id:0 , host : "ATQ38RM622:27004" },
{ _id:1 , host : "ATQ38RM622:27005" , arbiterOnly:true },
{ _id:2 , host : "ATQ38RM622:27006"}
]
}

Step 3:

rs.initiate(cfg)    ## first-time setup with the config above
rs.reconfig(cfg)    ## or reconfig if the set was already initiated
rs.status()

################################################################

{
    "set" : "turing",
    "date" : ISODate("2017-09-11T05:44:07.226Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1505108643, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1505108643, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1505108643, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "ATQ38RM622:27004",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 358,
            "optime" : {
                "ts" : Timestamp(1505108643, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-09-11T05:44:03Z"),
            "electionTime" : Timestamp(1505108382, 1),
            "electionDate" : ISODate("2017-09-11T05:39:42Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "ATQ38RM622:27005",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 102,
            "lastHeartbeat" : ISODate("2017-09-11T05:44:06.243Z"),
            "lastHeartbeatRecv" : ISODate("2017-09-11T05:44:06.522Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "ATQ38RM622:27006",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 102,
            "optime" : {
                "ts" : Timestamp(1505108643, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1505108643, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-09-11T05:44:03Z"),
            "optimeDurableDate" : ISODate("2017-09-11T05:44:03Z"),
            "lastHeartbeat" : ISODate("2017-09-11T05:44:06.245Z"),
            "lastHeartbeatRecv" : ISODate("2017-09-11T05:44:06.899Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "ATQ38RM622:27004",
            "configVersion" : 2
        }
    ],
    "ok" : 1
}
===============================
Ex 5.2
--------------------------
===============================
Ex 5.4
--------------------------

## forcing a reconfiguration when the set has lost its majority and no
## primary can be elected

rs.reconfig(cfg, {force : true})
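
A sketch of the recovery sequence, assuming only member 0 survived (the
surviving index is illustrative):

############
## connect to a surviving secondary
var cfg = rs.conf()
cfg.members = [cfg.members[0]]     // keep only the surviving member(s)
rs.reconfig(cfg, {force : true})   // force is required when there is no primary
############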


