Provided by: drbd-utils_9.22.0-1.1_amd64 

NAME
drbdsetup - Setup tool for DRBD
SYNOPSIS
drbdsetup new-resource resource [--cpu-mask {val}] [--on-no-data-accessible {io-error | suspend-io}]
drbdsetup new-minor resource minor volume
drbdsetup del-resource resource
drbdsetup del-minor minor
drbdsetup attach minor lower_dev meta_data_dev meta_data_index [--size {val}] [--max-bio-bvecs {val}]
[--on-io-error {pass_on | call-local-io-error | detach}]
[--fencing {dont-care | resource-only | resource-and-stonith}] [--disk-barrier]
[--disk-flushes] [--disk-drain] [--md-flushes] [--resync-rate {val}] [--resync-after {val}]
[--al-extents {val}] [--al-updates] [--discard-zeroes-if-aligned] [--disable-write-same]
[--c-plan-ahead {val}] [--c-delay-target {val}] [--c-fill-target {val}] [--c-max-rate {val}]
[--c-min-rate {val}] [--disk-timeout {val}]
[--read-balancing {prefer-local | prefer-remote | round-robin | least-pending | when-congested-remote | 32K-striping | 64K-striping | 128K-striping | 256K-striping | 512K-striping | 1M-striping}]
[--rs-discard-granularity {val}]
drbdsetup connect resource local_addr remote_addr [--tentative] [--discard-my-data]
[--protocol {A | B | C}] [--timeout {val}] [--max-epoch-size {val}] [--max-buffers {val}]
[--unplug-watermark {val}] [--connect-int {val}] [--ping-int {val}] [--sndbuf-size {val}]
[--rcvbuf-size {val}] [--ko-count {val}] [--allow-two-primaries] [--cram-hmac-alg {val}]
[--shared-secret {val}]
[--after-sb-0pri {disconnect | discard-younger-primary | discard-older-primary | discard-zero-changes | discard-least-changes | discard-local | discard-remote}]
[--after-sb-1pri {disconnect | consensus | discard-secondary | call-pri-lost-after-sb | violently-as0p}]
[--after-sb-2pri {disconnect | call-pri-lost-after-sb | violently-as0p}] [--always-asbp]
[--rr-conflict {disconnect | call-pri-lost | violently}] [--ping-timeout {val}]
[--data-integrity-alg {val}] [--tcp-cork] [--on-congestion {block | pull-ahead | disconnect}]
[--congestion-fill {val}] [--congestion-extents {val}] [--csums-alg {val}]
[--csums-after-crash-only] [--verify-alg {val}] [--use-rle] [--socket-check-timeout {val}]
drbdsetup disk-options minor [--on-io-error {pass_on | call-local-io-error | detach}]
[--fencing {dont-care | resource-only | resource-and-stonith}] [--disk-barrier]
[--disk-flushes] [--disk-drain] [--md-flushes] [--resync-rate {val}] [--resync-after {val}]
[--al-extents {val}] [--al-updates] [--discard-zeroes-if-aligned] [--disable-write-same]
[--c-plan-ahead {val}] [--c-delay-target {val}] [--c-fill-target {val}] [--c-max-rate {val}]
[--c-min-rate {val}] [--disk-timeout {val}]
[--read-balancing {prefer-local | prefer-remote | round-robin | least-pending | when-congested-remote | 32K-striping | 64K-striping | 128K-striping | 256K-striping | 512K-striping | 1M-striping}]
[--rs-discard-granularity {val}]
drbdsetup net-options local_addr remote_addr [--protocol {A | B | C}] [--timeout {val}]
[--max-epoch-size {val}] [--max-buffers {val}] [--unplug-watermark {val}] [--connect-int {val}]
[--ping-int {val}] [--sndbuf-size {val}] [--rcvbuf-size {val}] [--ko-count {val}]
[--allow-two-primaries] [--cram-hmac-alg {val}] [--shared-secret {val}]
[--after-sb-0pri {disconnect | discard-younger-primary | discard-older-primary | discard-zero-changes | discard-least-changes | discard-local | discard-remote}]
[--after-sb-1pri {disconnect | consensus | discard-secondary | call-pri-lost-after-sb | violently-as0p}]
[--after-sb-2pri {disconnect | call-pri-lost-after-sb | violently-as0p}] [--always-asbp]
[--rr-conflict {disconnect | call-pri-lost | violently}] [--ping-timeout {val}]
[--data-integrity-alg {val}] [--tcp-cork] [--on-congestion {block | pull-ahead | disconnect}]
[--congestion-fill {val}] [--congestion-extents {val}] [--csums-alg {val}]
[--csums-after-crash-only] [--verify-alg {val}] [--use-rle] [--socket-check-timeout {val}]
drbdsetup resource-options resource [--cpu-mask {val}] [--on-no-data-accessible {io-error | suspend-io}]
drbdsetup disconnect local_addr remote_addr [--force]
drbdsetup detach minor [--force]
drbdsetup primary minor [--force]
drbdsetup secondary minor
drbdsetup down resource
drbdsetup verify minor [--start {val}] [--stop {val}]
drbdsetup invalidate minor
drbdsetup invalidate-remote minor
drbdsetup wait-connect minor [--wfc-timeout {val}] [--degr-wfc-timeout {val}]
[--outdated-wfc-timeout {val}] [--wait-after-sb {val}]
drbdsetup wait-sync minor [--wfc-timeout {val}] [--degr-wfc-timeout {val}] [--outdated-wfc-timeout {val}]
[--wait-after-sb {val}]
drbdsetup role minor
drbdsetup cstate minor
drbdsetup dstate minor
drbdsetup resize minor [--size {val}] [--assume-peer-has-space] [--assume-clean] [--al-stripes {val}]
[--al-stripe-size-kB {val}]
drbdsetup check-resize minor
drbdsetup pause-sync minor
drbdsetup resume-sync minor
drbdsetup outdate minor
drbdsetup show-gi minor
drbdsetup get-gi minor
drbdsetup show {resource | minor | all}
drbdsetup suspend-io minor
drbdsetup resume-io minor
drbdsetup status {resource | all} [--color {val}]
drbdsetup events2 {resource | all}
drbdsetup events {resource | minor | all}
drbdsetup new-current-uuid minor [--clear-bitmap]
DESCRIPTION
drbdsetup is used to associate DRBD devices with their backing block devices, to set up DRBD device pairs
to mirror their backing block devices, and to inspect the configuration of running DRBD devices.
NOTE
drbdsetup is a low level tool of the DRBD program suite. It is used by the data disk and drbd scripts to
communicate with the device driver.
COMMANDS
Each drbdsetup sub-command might require arguments and bring its own set of options. All values have
default units which might be overruled by K, M or G. These units are defined in the usual way (e.g. K =
2^10 = 1024).
Common options
All drbdsetup sub-commands accept this common option:
--create-device
In case the specified DRBD device (minor number) does not exist yet, create it implicitly.
new-resource
Resources are the primary objects of any DRBD configuration. A resource must be created with the
new-resource command before any volumes or minor devices can be created. Connections are referenced by
name.
new-minor
The term minor is used as a synonym for a replicated block device. It is represented in the /dev/ directory by a block device, which is the application's interface to the DRBD-replicated block devices. These block devices are addressed by their minor numbers on the drbdsetup command line.
A pair of replicated block devices may have different minor numbers on the two machines. They are
associated by a common volume-number. Volume numbers are local to each connection. Minor numbers are
global on one node.
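For example, a resource r0 with a single volume (volume 0 on minor 0) might be created like this; the resource name and the numbers are placeholders chosen for illustration:
drbdsetup new-resource r0
drbdsetup new-minor r0 0 0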
del-resource
Destroys a resource object. This is only possible if the resource has no volumes.
del-minor
A minor can only be destroyed once its disk has been detached.
attach, disk-options
Attach associates a device with a lower_device that stores its data blocks. The -d (or --disk-size) option should only be used if you do not wish to use as much space as possible from the backing block device. If you do not use -d, the device is only ready for use once it has been connected to its peer. (See the connect command.)
With the disk-options command it is possible to change the options of a minor while it is attached.
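A hypothetical attach with external meta data (index 0), followed by a later disk-options change, might look like this; the minor number, the device paths, and the rate are placeholders:
drbdsetup attach 0 /dev/vg0/r0-data /dev/vg0/r0-meta 0
drbdsetup disk-options 0 --resync-rate 10M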
--disk-size size
You can override DRBD's size determination method with this option. If you need to use the device
before it was ever connected to its peer, use this option to pass the size of the DRBD device to the
driver. Default unit is sectors (1s = 512 bytes).
If you use the size parameter in drbd.conf, we strongly recommend adding an explicit unit postfix. drbdadm and drbdsetup used to have mismatching default units.
--on-io-error err_handler
If the driver of the lower_device reports an error to DRBD, DRBD will mark the disk as inconsistent,
call a helper program, or detach the device from its backing storage and perform all further IO by
requesting it from the peer. The valid err_handlers are: pass_on, call-local-io-error and detach.
--fencing fencing_policy
Fencing refers to preventive measures taken to avoid situations where both nodes are primary and disconnected (also known as split brain).
Valid fencing policies are:
dont-care
This is the default policy. No fencing actions are done.
resource-only
If a node becomes a disconnected primary, it tries to outdate the peer's disk. This is done by
calling the fence-peer handler. The handler is supposed to reach the other node over alternative
communication paths and call 'drbdadm outdate res' there.
resource-and-stonith
If a node becomes a disconnected primary, it freezes all its IO operations and calls its
fence-peer handler. The fence-peer handler is supposed to reach the peer over alternative
communication paths and call 'drbdadm outdate res' there. In case it cannot reach the peer, it
should stonith the peer. IO is resumed as soon as the situation is resolved. In case your handler
fails, you can resume IO with the resume-io command.
--disk-barrier,
--disk-flushes,
--disk-drain
DRBD has four implementations to express write-after-write dependencies to its backing storage
device. DRBD will use the first method that is supported by the backing storage device and that is
not disabled. By default the flush method is used.
Since drbd-8.4.2 disk-barrier is disabled by default because since linux-2.6.36 (or 2.6.32 in RHEL6) there is no reliable way to determine whether queuing of IO-barriers works. This method is dangerous; only enable it if you are told to do so by someone who knows for sure.
When selecting the method you should not only base your decision on the measurable performance. In
case your backing storage device has a volatile write cache (plain disks, RAID of plain disks) you
should use one of the first two. In case your backing storage device has battery-backed write cache
you may go with option 3. Option 4 (disable everything, use "none") is dangerous on most IO stacks,
may result in write-reordering, and if so, can theoretically be the reason for data corruption, or
disturb the DRBD protocol, causing spurious disconnect/reconnect cycles. Do not use no-disk-drain.
Unfortunately device mapper (LVM) might not support barriers.
The letter after "wo:" in /proc/drbd indicates which method is currently in use for a device: b, f, d, n. The implementations:
barrier
The first requires that the driver of the backing storage device support barriers (called 'tagged command queuing' in SCSI and 'native command queuing' in SATA speak). The use of this method can be enabled by setting the disk-barrier option to yes.
flush
The second requires that the backing device support disk flushes (called 'force unit access' in drive vendor speak). The use of this method can be disabled by setting disk-flushes to no.
drain
The third method is simply to let write requests drain before write requests of a new reordering
domain are issued. That was the only implementation before 8.0.9.
none
The fourth method is to not express write-after-write dependencies to the backing store at all,
by also specifying --no-disk-drain. This is dangerous on most IO stacks, may result in
write-reordering, and if so, can theoretically be the reason for data corruption, or disturb the
DRBD protocol, causing spurious disconnect/reconnect cycles. Do not use --no-disk-drain.
--md-flushes
Disables the use of disk flushes and barrier BIOs when accessing the meta data device. See the notes
on --disk-flushes.
--max-bio-bvecs
In some special circumstances the device mapper stack manages to pass BIOs to DRBD that violate the
constraints that are set forth by DRBD's merge_bvec() function and which have more than one bvec. A
known example is: phys-disk -> DRBD -> LVM -> Xen -> misaligned partition (63) -> DomU FS. Then you
might see "bio would need to, but cannot, be split:" in the Dom0's kernel log.
The best workaround is to properly align the partition within the VM (e.g. start it at sector 1024). That costs 480 KiB of storage. Unfortunately the default of most Linux partitioning tools is to start the first partition at an odd number (63). Therefore most distributions' install helpers for virtual Linux machines will end up with misaligned partitions. The second best workaround is to limit DRBD's max bvecs per BIO (i.e., the max-bio-bvecs option) to 1, but that might cost performance.
The default value of max-bio-bvecs is 0, which means that there is no user imposed limitation.
--resync-rate rate
To ensure smooth operation of the application on top of DRBD, it is possible to limit the bandwidth
that may be used by background synchronization. The default is 250 KiB/sec, the default unit is
KiB/sec.
--resync-after minor
Start resync on this device only if the device with minor is already in connected state. Otherwise
this device waits in SyncPause state.
--al-extents extents
DRBD automatically performs hot area detection. With this parameter you control how big the hot area
(=active set) can get. Each extent marks 4M of the backing storage. In case a primary node leaves the
cluster unexpectedly, the areas covered by the active set must be resynced upon rejoining of the
failed node. The data structure is stored in the meta-data area, therefore each change of the active
set is a write operation to the meta-data device. A higher number of extents gives longer resync times but fewer updates to the meta-data. The default number of extents is 1237. (Minimum: 7, Maximum: 65534)
See also drbd.conf(5) and drbdmeta(8) for additional limitations and necessary preparation.
--al-updates {yes | no}
DRBD's activity log transaction writing makes it possible that, after the crash of a primary node, a partial (bitmap-based) resync is sufficient to bring the node back up to date. Setting al-updates to no might increase normal operation performance but causes DRBD to do a full resync when a crashed primary gets reconnected. The default value is yes.
--c-plan-ahead plan_time,
--c-fill-target fill_target,
--c-delay-target delay_target,
--c-max-rate max_rate
The dynamic resync speed controller gets enabled with setting plan_time to a positive value. It aims
to fill the buffers along the data path with either a constant amount of data fill_target, or aims to
have a constant delay time of delay_target along the path. The controller has an upper bound of
max_rate.
The plan_time parameter configures the agility of the controller; higher values yield slower, lower responses of the controller to deviations from the target value. It should be at least 5 times RTT.
For regular data paths a fill_target in the area of 4k to 100k is appropriate. For a setup that
contains drbd-proxy it is advisable to use delay_target instead. The controller only uses delay_target when fill_target is set to 0. 5 times RTT is a reasonable starting value. Max_rate should be set to the bandwidth available between the DRBD hosts and the machines hosting DRBD-proxy, or to the available disk bandwidth.
The default value of plan_time is 0; its default unit is 0.1 seconds. Fill_target has a default value of 0 and sectors as its default unit. Delay_target has a default value of 1 (100ms) and 0.1 seconds as its default unit. Max_rate has a default value of 10240 (100MiB/s) and KiB/s as its default unit.
--c-min-rate min_rate
We track the disk IO rate caused by the resync, so we can detect non-resync IO on the lower level
device. If the lower level device seems to be busy, and the current resync rate is above min_rate, we
throttle the resync.
The default value of min_rate is 4M, the default unit is k. If you do not want to throttle at all, set it to zero; if you want to throttle always, set it to one.
-t, --disk-timeout disk_timeout
If the lower-level device on which a DRBD device stores its data does not finish an I/O request
within the defined disk-timeout, DRBD treats this as a failure. The lower-level device is detached,
and the device's disk state advances to Diskless. If DRBD is connected to one or more peers, the
failed request is passed on to one of them.
This option is dangerous and may lead to kernel panic!
"Aborting" requests, or force-detaching the disk, is intended for completely blocked/hung local
backing devices which do no longer complete requests at all, not even do error completions. In this
situation, usually a hard-reset and failover is the only way out.
By "aborting", basically faking a local error-completion, we allow for a more graceful swichover by
cleanly migrating services. Still the affected node has to be rebooted "soon".
By completing these requests, we allow the upper layers to re-use the associated data pages.
If later the local backing device "recovers", and now DMAs some data from disk into the original
request pages, in the best case it will just put random data into unused pages; but typically it will
corrupt meanwhile completely unrelated data, causing all sorts of damage.
Which means delayed successful completion, especially for READ requests, is a reason to panic(). We
assume that a delayed *error* completion is OK, though we still will complain noisily about it.
The default value of disk-timeout is 0, which stands for an infinite timeout. Timeouts are specified
in units of 0.1 seconds. This option is available since DRBD 8.3.12.
--discard-zeroes-if-aligned {yes | no}
Setting discard-zeroes-if-aligned to no will cause DRBD to always fall-back to zero-out on the
receiving side, and to not even announce discard capabilities on the Primary, if the respective
backend announces discard_zeroes_data=false.
Setting discard-zeroes-if-aligned to yes will allow DRBD to use discards, and to announce discard_zeroes_data=true, even on backends that announce discard_zeroes_data=false.
We used to ignore the discard_zeroes_data setting completely. To not break established and expected
behaviour, the default value is yes.
This option is available since 8.4.7. See also drbd.conf(5).
--disable-write-same {yes | no}
Some disks announce WRITE_SAME support to the kernel but fail with an I/O error upon actually
receiving such a request. This mostly happens when using virtualized disks -- notably, this behavior
has been observed with VMware's virtual disks.
When disable-write-same is set to yes, WRITE_SAME detection is manually overridden and support is disabled.
The default value of disable-write-same is no. This option is available since 8.4.7.
--read-balancing method
The supported methods for load balancing of read requests are prefer-local, prefer-remote,
round-robin, least-pending and when-congested-remote, 32K-striping, 64K-striping, 128K-striping,
256K-striping, 512K-striping and 1M-striping.
The default value of read-balancing is prefer-local. This option is available since 8.4.1.
--rs-discard-granularity bytes
When rs-discard-granularity is set to a non-zero, positive value then DRBD tries to do a resync operation in requests of this size. In case such a block contains only zero bytes on the sync source node, the sync target node will issue a discard/trim/unmap command for the area.
The value is constrained by the discard granularity of the backing block device. In case rs-discard-granularity is not a multiple of the discard granularity of the backing block device, DRBD rounds it up. The feature only becomes active if the backing block device reads back zeroes after a discard command.
The default value of rs-discard-granularity is 0. This option is available since 8.4.7.
connect, net-options
Connect sets up the device to listen on af:local_addr:port for incoming connections and to try to connect
to af:remote_addr:port. If port is omitted, 7788 is used as the default. If af is omitted, ipv4 is used.
Other supported address families are ipv6, ssocks for Dolphin Interconnect Solutions' "super sockets" and
sdp for Sockets Direct Protocol (Infiniband).
The net-options command allows you to change options while the connection is established.
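A hypothetical connect with protocol C and peer authentication enabled might look like this; the resource name, addresses, port, and secret are placeholders, and sha1 is assumed to be listed in /proc/crypto:
drbdsetup connect r0 ipv4:10.0.0.1:7789 ipv4:10.0.0.2:7789 --protocol C --cram-hmac-alg sha1 --shared-secret FooFunFactory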
--protocol protocol
On the TCP/IP link the specified protocol is used. Valid protocol specifiers are A, B, and C.
Protocol A: write IO is reported as completed, if it has reached local disk and local TCP send
buffer.
Protocol B: write IO is reported as completed, if it has reached local disk and remote buffer cache.
Protocol C: write IO is reported as completed, if it has reached both local and remote disk.
--connect-int time
In case it is not possible to connect to the remote DRBD device immediately, DRBD keeps on trying to
connect. With this option you can set the time between two retries. The default value is 10. The unit
is seconds.
--ping-int time
If the TCP/IP connection linking a DRBD device pair is idle for more than time seconds, DRBD will
generate a keep-alive packet to check if its partner is still alive. The default value is 10. The
unit is seconds.
--timeout val
If the partner node fails to send an expected response packet within val tenths of a second, the
partner node is considered dead and therefore the TCP/IP connection is abandoned. The default value
is 60 (= 6 seconds).
--sndbuf-size size
The socket send buffer is used to store packets sent to the secondary node, which are not yet
acknowledged (from a network point of view) by the secondary node. When using protocol A, it might be
necessary to increase the size of this data structure in order to increase asynchronicity between
primary and secondary nodes. But keep in mind that more asynchronicity is synonymous with more data
loss in the case of a primary node failure. Since 8.0.13 resp. 8.2.7 setting the size value to 0
means that the kernel should autotune this. The default size is 0, i.e. autotune.
--rcvbuf-size size
Packets received from the network are stored in the socket receive buffer first. From there they are
consumed by DRBD. Before 8.3.2 the receive buffer's size was always set to the size of the socket
send buffer. Since 8.3.2 they can be tuned independently. A value of 0 means that the kernel should
autotune this. The default size is 0, i.e. autotune.
--ko-count count
In case the secondary node fails to complete a single write request for count times the timeout, it
is expelled from the cluster, i.e. the primary node goes into StandAlone mode. To disable this
feature, you should explicitly set it to 0; defaults may change between versions.
--max-epoch-size val
With this option the maximal number of write requests between two barriers is limited. Typically set
to the same as --max-buffers, or the allowed maximum. Values smaller than 10 can lead to degraded
performance. The default value is 2048.
--max-buffers val
With this option the maximal number of buffer pages allocated by DRBD's receiver thread is limited.
Typically set to the same as --max-epoch-size. Small values could lead to degraded performance. The
default value is 2048, the minimum 32. Increase this if you cannot saturate the IO backend of the
receiving side during linear write or during resync while otherwise idle.
See also drbd.conf(5)
--unplug-watermark val
This setting has no effect with recent kernels that use explicit on-stack plugging (upstream Linux
kernel 2.6.39, distributions may have backported).
When the number of pending write requests on the standby (secondary) node exceeds the
unplug-watermark, we trigger the request processing of our backing storage device. Some storage
controllers deliver better performance with small values, others deliver best performance when the
value is set to the same value as max-buffers, yet others don't feel much effect at all. Minimum 16,
default 128, maximum 131072.
--allow-two-primaries
With this option set you may assign the primary role to both nodes. You should only use this option if you use a shared storage file system on top of DRBD. At the time of writing the only ones are: OCFS2 and GFS. If you use this option with any other file system, you are going to crash your nodes and corrupt your data!
--cram-hmac-alg alg
You need to specify the HMAC algorithm to enable peer authentication at all. You are strongly
encouraged to use peer authentication. The HMAC algorithm will be used for the challenge response
authentication of the peer. You may specify any digest algorithm that is named in /proc/crypto.
--shared-secret secret
The shared secret used in peer authentication. May be up to 64 characters.
--after-sb-0pri asb-0p-policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
discard-younger-primary
Auto sync from the node that was primary before the split-brain situation occurred.
discard-older-primary
Auto sync from the node that became primary second during the split-brain situation.
discard-zero-changes
In case one node did not write anything since the split brain became evident, sync from the node
that wrote something to the node that did not write anything. In case none wrote anything this
policy uses a random decision to perform a "resync" of 0 blocks. In case both have written
something this policy disconnects the nodes.
discard-least-changes
Auto sync from the node that touched more blocks during the split brain situation.
discard-node-NODENAME
Auto sync to the named node.
--after-sb-1pri asb-1p-policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
consensus
Discard the version of the secondary if the outcome of the after-sb-0pri algorithm would also
destroy the current secondary's data. Otherwise disconnect.
discard-secondary
Discard the secondary's version.
call-pri-lost-after-sb
Always honor the outcome of the after-sb-0pri algorithm. In case it decides the current secondary
has the correct data, call the pri-lost-after-sb on the current primary.
violently-as0p
Always honor the outcome of the after-sb-0pri algorithm. In case it decides the current secondary
has the correct data, accept a possible instantaneous change of the primary's data.
--after-sb-2pri asb-2p-policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
call-pri-lost-after-sb
Always honor the outcome of the after-sb-0pri algorithm. In case it decides the current secondary
has the right data, call the pri-lost-after-sb on the current primary.
violently-as0p
Always honor the outcome of the after-sb-0pri algorithm. In case it decides the current secondary
has the right data, accept a possible instantaneous change of the primary's data.
--always-asbp
Normally the automatic after-split-brain policies are only used if current states of the UUIDs do not
indicate the presence of a third node.
With this option you request that the automatic after-split-brain policies are used as long as the
data sets of the nodes are somehow related. This might cause a full sync, if the UUIDs indicate the
presence of a third node. (Or double faults have led to strange UUID sets.)
--rr-conflict role-resync-conflict-policy
This option sets DRBD's behavior when DRBD deduces from its meta data that a resynchronization is
needed, and the SyncTarget node is already primary. The possible settings are: disconnect,
call-pri-lost and violently. While disconnect speaks for itself, with the call-pri-lost setting the
pri-lost handler is called which is expected to either change the role of the node to secondary, or
remove the node from the cluster. The default is disconnect.
With the violently setting you allow DRBD to force a primary node into SyncTarget state. This means
that the data exposed by DRBD changes to the SyncSource's version of the data instantaneously. USE
THIS OPTION ONLY IF YOU KNOW WHAT YOU ARE DOING.
--data-integrity-alg hash_alg
DRBD can ensure the data integrity of the user's data on the network by comparing hash values.
Normally this is ensured by the 16 bit checksums in the headers of TCP/IP packets. This option can be
set to any of the kernel's data digest algorithms. In a typical kernel configuration you should have
at least one of md5, sha1, and crc32c available. By default this is not enabled.
See also the notes on data integrity on the drbd.conf manpage.
--no-tcp-cork
DRBD usually uses the TCP socket option TCP_CORK to hint to the network stack when it can expect more
data, and when it should flush out what it has in its send queue. There is at least one network stack
that performs worse when one uses this hinting method. Therefore we introduced this option, which disables the setting and clearing of the TCP_CORK socket option by DRBD.
--ping-timeout ping_timeout
The time the peer has to answer to a keep-alive packet. In case the peer's reply is not received
within this time period, it is considered dead. The default unit is tenths of a second, the default
value is 5 (for half a second).
--discard-my-data
Use this option to manually recover from a split-brain situation. In case you do not have any
automatic after-split-brain policies selected, the nodes refuse to connect. By passing this option
you make this node a sync target immediately after successful connect.
--tentative
Causes DRBD to abort the connection process after the resync handshake, i.e. no resync gets
performed. You can find out which resync DRBD would perform by looking at the kernel's log file.
--on-congestion congestion_policy,
--congestion-fill fill_threshold,
--congestion-extents active_extents_threshold
By default DRBD blocks when the available TCP send queue becomes full. That means it will slow down
the application that generates the write requests that cause DRBD to send more data down that TCP
connection.
When DRBD is deployed with DRBD-proxy it might be more desirable that DRBD goes into AHEAD/BEHIND
mode shortly before the send queue becomes full. In AHEAD/BEHIND mode DRBD does no longer replicate
data, but still keeps the connection open.
The advantage of the AHEAD/BEHIND mode is that the application is not slowed down, even if
DRBD-proxy's buffer is not sufficient to buffer all write requests. The downside is that the peer
node falls behind, and that a resync will be necessary to bring it back into sync. During that resync
the peer node will have an inconsistent disk.
Available congestion policies are block and pull-ahead. The default is block. Fill_threshold may be in the range of 0 to 10GiBytes. The default is 0, which disables the check. Active_extents_threshold has the same limits as al-extents.
The AHEAD/BEHIND mode and its settings are available since DRBD 8.3.10.
--verify-alg hash-alg
During online verification (as initiated by the verify sub-command), rather than doing a bit-wise
comparison, DRBD applies a hash function to the contents of every block being verified, and compares
that hash with the peer. This option defines the hash algorithm being used for that purpose. It can
be set to any of the kernel's data digest algorithms. In a typical kernel configuration you should
have at least one of md5, sha1, and crc32c available. By default this is not enabled; you must set
this option explicitly in order to be able to use on-line device verification.
See also the notes on data integrity on the drbd.conf manpage.
--csums-alg hash-alg
A resync process sends all marked data blocks from the source to the destination node, as long as no csums-alg is given. When one is specified, the resync process exchanges hash values of all marked blocks first, and only sends those data blocks that have different hash values.
This setting is useful for DRBD setups with low bandwidth links. During the restart of a crashed
primary node, all blocks covered by the activity log are marked for resync. But a large part of those
will actually be still in sync, therefore using csums-alg will lower the required bandwidth in
exchange for CPU cycles.
--use-rle
During resync-handshake, the dirty-bitmaps of the nodes are exchanged and merged (using bit-or), so
the nodes will have the same understanding of which blocks are dirty. On large devices, the fine
grained dirty-bitmap can become large as well, and the bitmap exchange can take quite some time on
low-bandwidth links.
Because the bitmap typically contains compact areas where all bits are unset (clean) or set (dirty),
a simple run-length encoding scheme can considerably reduce the network traffic necessary for the
bitmap exchange.
For backward compatibility reasons, and because on fast links this possibly does not improve transfer
time but consumes cpu cycles, this defaults to off.
Introduced in 8.3.2.
--socket-check-timeout
In setups involving a DRBD-proxy and connections that experience a lot of buffer-bloat it might be necessary to set ping-timeout to an unusually high value. By default DRBD uses the same value to wait until a newly established TCP connection is considered stable. Since the DRBD-proxy is usually located in the same data center, such a long wait time may hinder DRBD's connect process.
In such setups socket-check-timeout should be set to at least the round trip time between DRBD and DRBD-proxy, i.e. in most cases to 1.
The default unit is tenths of a second, the default value is 0 (which causes DRBD to use the value of
ping-timeout instead). Introduced in 8.4.5.
resource-options
Changes the options of the resource at runtime.
--cpu-mask cpu-mask
Sets the cpu-affinity-mask for DRBD's kernel threads of this device. The default value of cpu-mask is
0, which means that DRBD's kernel threads should be spread over all CPUs of the machine. This value
must be given in hexadecimal notation. If it is too big it will be truncated.
--on-no-data-accessible ond-policy
This setting controls what happens to IO requests on a degraded, diskless node (i.e. no data store is reachable). The available policies are io-error and suspend-io.
If ond-policy is set to suspend-io you can resume IO either by attaching/connecting the last lost data storage, or by the drbdadm resume-io res command. The latter will of course result in IO errors.
The default is io-error. This setting is available since DRBD 8.3.9.
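For example, to pin this resource's kernel threads to the first two CPUs and to suspend IO when no data store is reachable (the resource name and the mask are placeholders):
drbdsetup resource-options r0 --cpu-mask 3 --on-no-data-accessible suspend-io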
primary
Sets the device into primary role. This means that applications (e.g. a file system) may open the device
for read and write access. Data written to the device in primary role are mirrored to the device in
secondary role.
Normally it is not possible to set both devices of a connected DRBD device pair to primary role. By using
the --allow-two-primaries option, you override this behavior and instruct DRBD to allow two primaries.
--overwrite-data-of-peer
Alias for --force.
--force
Becoming primary fails if the local replica is not up-to-date, i.e. when it is inconsistent, outdated, or consistent. By using this option you can force it into the primary role anyway. USE THIS OPTION ONLY IF YOU KNOW WHAT YOU ARE DOING.
secondary
Brings the device into secondary role. This operation fails as long as at least one application (or file
system) has opened the device.
It is possible that both devices of a connected DRBD device pair are secondary.
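For example, promoting and later demoting minor 0 (the minor number is a placeholder; demotion fails while the device is still held open):
drbdsetup primary 0
drbdsetup secondary 0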
verify
This initiates on-line device verification. During on-line verification, the contents of every block on
the local node are compared to those on the peer node. Device verification progress can be monitored via
/proc/drbd. Any blocks whose content differs from that of the corresponding block on the peer node will
be marked out-of-sync in DRBD's on-disk bitmap; they are not brought back in sync automatically. To do
that, simply disconnect and reconnect the resource.
If on-line verification is already in progress (and this node is "VerifyS"), this command silently
"succeeds". In this case, any start-sector (see below) will be ignored, and any stop-sector (see below)
will be honored. This can be used to stop a running verify, or to update/shorten/extend the coverage of
the currently running verify.
This command will fail if the device is not part of a connected device pair.
See also the notes on data integrity on the drbd.conf manpage.
--start start-sector
Since version 8.3.2, on-line verification should resume from the last position after connection loss.
It may also be started from an arbitrary position by setting this option. If you had reached some
stop-sector before, and you do not specify an explicit start-sector, verify should resume from the
previous stop-sector.
Default unit is sectors. You may also specify a unit explicitly. The start-sector will be rounded
down to a multiple of 8 sectors (4kB).
-S, --stop stop-sector
Since version 8.3.14, on-line verification can be stopped before it reaches end-of-device.
Default unit is sectors. You may also specify a unit explicitly. The stop-sector may be updated by
issuing an additional drbdsetup verify command on the same node while the verify is running. This can
be used to stop a running verify, or to update/shorten/extend the coverage of the currently running
verify.
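For example, a verify run over the first GiB of minor 0 might be started like this; the minor number and the range are placeholders (2097152 sectors of 512 bytes equal 1 GiB):
drbdsetup verify 0 --start 0 --stop 2097152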
invalidate
This forces the local device of a pair of connected DRBD devices into SyncTarget state, which means that
all data blocks of the device are copied over from the peer.
This command will fail if the device is not either part of a connected device pair, or disconnected
Secondary.
invalidate-remote
This forces the local device of a pair of connected DRBD devices into SyncSource state, which means that
all data blocks of the device are copied to the peer.
On a disconnected Primary device, this will set all bits in the out-of-sync bitmap. As a side effect this suspends updates to the on-disk activity log. Updates to the on-disk activity log resume automatically when necessary.
wait-connect
Returns as soon as the device can communicate with its partner device.
--wfc-timeout wfc_timeout,
--degr-wfc-timeout degr_wfc_timeout,
--outdated-wfc-timeout outdated_wfc_timeout,
--wait-after-sb
This command will fail if the device cannot communicate with its partner for timeout seconds. If the
peer was working before this node was rebooted, the wfc_timeout is used. If the peer was already down
before this node was rebooted, the degr_wfc_timeout is used. If the peer was successfully outdated
before this node was rebooted the outdated_wfc_timeout is used. The default value for all those
timeout values is 0 which means to wait forever. The unit is seconds. In case the connection status
goes down to StandAlone because the peer appeared but the devices had a split brain situation, the
default for the command is to terminate. You can change this behavior with the --wait-after-sb
option.
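For example, waiting at boot for up to 120 seconds, or 60 seconds if the peer was already down before this node rebooted (the minor number and the timeout values are placeholders):
drbdsetup wait-connect 0 --wfc-timeout 120 --degr-wfc-timeout 60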
wait-sync
Returns as soon as the device leaves synchronization and enters the connected state. The options are the same as with the wait-connect command.
disconnect
Removes the information set by the connect command from the device. This means that the device goes into unconnected state and will no longer listen for incoming connections.
detach
Removes the information set by the attach command from the device. This means that the device is detached from its backing storage device.
-f, --force
A regular detach returns after the disk state has finally reached diskless. As a consequence, detaching from a frozen backing block device never terminates.
A forced detach, on the other hand, returns immediately. It allows you to detach DRBD from a frozen backing block device. Please note that the disk will be marked as failed until all pending IO requests have been finished by the backing block device.
down
Removes all configuration information from the device and forces it back to unconfigured state.
role
Shows the current roles of the device and its peer, as local/peer.
state
Deprecated alias for "role"
cstate
Shows the current connection state of the device.
dstate
Shows the current states of the backing storage devices, as local/peer.
resize
This causes DRBD to reexamine the size of the device's backing storage device. To actually do online
growing you need to extend the backing storages on both devices and call the resize command on one of
your nodes.
The --size option can be used to online shrink the usable size of a DRBD device. It is the user's responsibility to make sure that a file system on the device is not truncated by that operation.
The --assume-peer-has-space option allows you to resize a device which is currently not connected to the peer. Use with care: if you do not resize the peer's disk as well, further connect attempts of the two will fail.
When the --assume-clean option is given DRBD will skip the resync of the new storage. Only do this if you
know that the new storage was initialized to the same content by other means.
The options --al-stripes and --al-stripe-size-kB may be used to change the layout of the activity log online. In case of internal meta data this may involve shrinking the user-visible size at the same time (using the --size option) or increasing the available space on the backing devices.
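For example, after growing the backing devices on both nodes, the online grow might be triggered on one node like this (the minor number is a placeholder):
drbdsetup resize 0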
check-resize
To enable DRBD to detect offline resizing of backing devices, this command may be used to record the current size of the backing devices. The size is stored in files in /var/lib/drbd/ named drbd-minor-??.lkbd. This command is called by drbdadm resize res after drbdsetup device resize has returned.
pause-sync
Temporarily suspend an ongoing resynchronization by setting the local pause flag. Resync only progresses
if neither the local nor the remote pause flag is set. It might be desirable to postpone DRBD's
resynchronization after eventual resynchronization of the backing storage's RAID setup.
resume-sync
Unset the local sync pause flag.
outdate
Mark the data on the local backing storage as outdated. An outdated device refuses to become primary.
This is used in conjunction with fencing and by the peer's fence-peer handler.
show-gi
Displays the device's data generation identifiers verbosely.
get-gi
Displays the device's data generation identifiers.
show
Shows all available configuration information of a resource, or of all resources. Available options:
--show-defaults
Show all configuration parameters, even the ones with default values. Normally, parameters with
default values are not shown.
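For example, to dump the configuration of resource r0 including parameters that are still at their defaults (the resource name is a placeholder):
drbdsetup show r0 --show-defaults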
suspend-io
This command is of no apparent use and just provided for the sake of completeness.
resume-io
If the fence-peer handler fails to stonith the peer node, and your fencing policy is set to
resource-and-stonith, you can unfreeze IO operations with this command.
status
Show the status of a resource, or of all resources. The output consists of one paragraph for each
configured resource. Each paragraph contains one line for each resource, followed by one line for each
device, and one line for each connection. The device and connection lines are indented. The connection
lines are followed by one line for each peer device; these lines are indented against the connection
line.
Long lines are wrapped around at terminal width, and indented to indicate how the lines belong together.
Available options:
--verbose
Include more information in the output even when it is likely redundant or irrelevant.
--statistics
Include data transfer statistics in the output.
--color={always | auto | never}
Colorize the output. With --color=auto, drbdsetup emits color codes only when standard output is
connected to a terminal.
For example, the non-verbose output for a resource with only one connection and only one volume could
look like this:
fs-backoffice role:Primary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate
With the --verbose --statistics options, the same resource could be reported as:
fs-data role:Primary suspended:no
write-ordering:drain
volume:0 minor:1 disk:UpToDate
size:10616472 read:134465 written:144800 al-writes:18 bm-writes:0
upper-pending:0 lower-pending:0 al-suspended:no blocked:no
peer connection:Connected role:Secondary congested:no
volume:0 replication:Established peer-disk:UpToDate resync-suspended:no
received:122596 sent:22204 out-of-sync:0 pending:0 unacked:0
events2
Show the current state of all configured DRBD objects, followed by all changes to the state.
The output format is meant to be human as well as machine readable. Each line starts with the event
number, which is followed by an asterisk if the event continues in the next line. The second word in each
line indicates the kind of event: exists for an existing object; create, destroy, and change if an object
is created, destroyed, or changed; or call or response if an event handler is called or it returns. The
third word indicates the object the event applies to: resource, device, connection, peer-device, helper,
or a dash (-) to indicate that the current state has been dumped completely.
The remaining words identify the object and describe the state that the object is in. Available options:
--now
Terminate after reporting the current state. The default is to continuously listen and report state
changes.
--statistics
Include statistics in the output.
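For example, to dump the current state of resource r0 once, including statistics, and then exit (the resource name is a placeholder):
drbdsetup events2 r0 --now --statistics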
events
Deprecated. If possible, change to the events2 subcommand instead.
Displays every state change of DRBD and all calls to helper programs. This might be used to get notified
of DRBD's state changes by piping the output to another program.
--all-devices
Display the events of all DRBD minors.
--unfiltered
This is a debugging aid that displays the content of all received netlink messages.
new-current-uuid
Generates a new current UUID and rotates all other UUID values. This has at least two use cases, namely
to skip the initial sync, and to reduce network bandwidth when starting in a single node configuration
and then later (re-)integrating a remote site.
Available option:
--clear-bitmap
Clears the sync bitmap in addition to generating a new current UUID.
This can be used to skip the initial sync, if you want to start from scratch. This use case only works on "Just Created" meta data. Necessary steps:
1. On both nodes, initialize meta data and configure the device.
drbdadm -- --force create-md res
2. They need to do the initial handshake, so they know their sizes.
drbdadm up res
3. They are now Connected Secondary/Secondary Inconsistent/Inconsistent. Generate a new current-uuid and
clear the dirty bitmap.
drbdadm new-current-uuid --clear-bitmap res
4. They are now Connected Secondary/Secondary UpToDate/UpToDate. Make one side primary and create a file
system.
drbdadm primary res
mkfs -t fs-type $(drbdadm sh-dev res)
One obvious side-effect is that the replica is full of old garbage (unless you made them identical using
other means), so any online-verify is expected to find any number of out-of-sync blocks.
You must not use this on pre-existing data! Even though it may appear to work at first glance, once you
switch to the other node, your data is toast, as it never got replicated. So do not leave out the mkfs
(or equivalent).
This can also be used to shorten the initial resync of a cluster where the second node is added after the first node has gone into production, by means of disk shipping. This use case works on disconnected devices only; the device may be in primary or secondary role.
The necessary steps on the current active server are:
1. drbdsetup new-current-uuid --clear-bitmap minor
2. Take a copy of the current active server, e.g. by pulling a disk out of the RAID1 controller, or by copying with dd. You need to copy the actual data and the meta data.
3. drbdsetup new-current-uuid minor
Now add the disk to the new secondary node, and join it to the cluster. You will get a resync of those parts that were changed since the first call to drbdsetup in step 1.
EXAMPLES
For examples, please have a look at the DRBD User's Guide[1].
VERSION
This document was revised for version 8.3.2 of the DRBD distribution.
AUTHOR
Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars Ellenberg <lars.ellenberg@linbit.com>
REPORTING BUGS
Report bugs to <drbd-user@lists.linbit.com>.
COPYRIGHT
Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner, Lars Ellenberg. This is free
software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.
SEE ALSO
drbd.conf(5), drbd(8), drbddisk(8), drbdadm(8), DRBD User's Guide[1], DRBD web site[2]
NOTES
1. DRBD User's Guide
http://www.drbd.org/users-guide/
2. DRBD web site
http://www.drbd.org/
DRBD 8.4.0 6 May 2011 DRBDSETUP(8)