Check the currently running kernel.
$ uname --kernel-release
4.18.0-0.bpo.1-amd64
Checking the origin of the packages for this kernel shows that it comes from backports.
$ apt-cache policy "*$(uname --kernel-release)*"
linux-headers-4.18.0-0.bpo.1-amd64:
  Installed: 4.18.6-1~bpo9+1
  Candidate: 4.18.6-1~bpo9+1
  Version table:
 *** 4.18.6-1~bpo9+1 750
        100 http://ftp.jp.debian.org/debian stretch-backports/main amd64 Packages
        100 /var/lib/dpkg/status
linux-latest-modules-4.18.0-0.bpo.1-amd64:
  Installed: (none)
  Candidate: (none)
  Version table:
linux-image-4.18.0-0.bpo.1-amd64:
  Installed: 4.18.6-1~bpo9+1
  Candidate: 4.18.6-1~bpo9+1
  Version table:
 *** 4.18.6-1~bpo9+1 750
        100 http://ftp.jp.debian.org/debian stretch-backports/main amd64 Packages
        100 /var/lib/dpkg/status
linux-image-4.18.0-0.bpo.1-amd64-dbg:
  Installed: (none)
  Candidate: 4.18.6-1~bpo9+1
  Version table:
     4.18.6-1~bpo9+1 750
        100 http://ftp.jp.debian.org/debian stretch-backports/main amd64 Packages
Before installing zfs-dkms, check the policy to see which version would be installed. The output shows that a plain apt-get install zfs-dkms would select the stretch version 0.6.5.9-5 (pin priority 990), not the one from backports.
$ apt-cache policy -- zfs-dkms
zfs-dkms:
  Installed: (none)
  Candidate: 0.6.5.9-5
  Version table:
     0.7.11-1~bpo9+1 100
        100 http://ftp.jp.debian.org/debian stretch-backports/contrib amd64 Packages
        100 /var/lib/dpkg/status
     0.6.5.9-5 990
        990 http://ftp.jp.debian.org/debian stretch/contrib amd64 Packages
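For reference, the 0.7.11-1~bpo9+1 version only shows up because stretch-backports is enabled in APT; a minimal sources.list entry for it would look like the following line (a sketch matching the mirror seen in the output above).
deb http://ftp.jp.debian.org/debian stretch-backports main contrib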
So, at install time, explicitly pull the package from backports.
# apt-get install -- zfs-dkms/stretch-backports
Dependency resolution and the DKMS build run into problems otherwise, so reinstall a set of related packages, all from backports.
# apt-get \
    --reinstall \
    install \
    -- \
    linux-headers-amd64/stretch-backports \
    linux-image-amd64/stretch-backports \
    linux-compiler-gcc-6-x86/stretch-backports \
    zfs-dkms/stretch-backports \
    spl-dkms/stretch-backports \
    ;
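After the reinstall it is worth confirming that DKMS actually built the modules against the running kernel. dkms status should report spl and zfs as installed; the output below is a sketch, and the exact version strings will vary.
# dkms status
spl, 0.7.11, 4.18.0-0.bpo.1-amd64, x86_64: installed
zfs, 0.7.11, 4.18.0-0.bpo.1-amd64, x86_64: installed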
After a reboot, the system comes up degraded. According to the Debian BTS, booting degraded at this point is normal: it happens because no ZFS partition exists yet.
$ systemctl status -l | head
● ****
    State: degraded
     Jobs: 0 queued
   Failed: 3 units
    Since: Mon 2018-11-26 03:06:43 JST; 55min ago
   CGroup: /
           ├─user.slice
           │ ├─user-1001.slice
           │ │ ├─session-2.scope
           │ │ │ ├─1734 sshd: **** [priv]
# systemctl status zfs-mount.service zfs-share.service zfs-zed.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2018-11-26 03:06:45 JST; 54min ago
     Docs: man:zfs(8)
  Process: 772 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 772 (code=exited, status=1/FAILURE)

Nov 26 03:06:45 **** systemd[1]: Starting Mount ZFS filesystems...
Nov 26 03:06:45 **** zfs[772]: The ZFS modules are not loaded.
Nov 26 03:06:45 **** zfs[772]: Try running '/sbin/modprobe zfs' as root to load them.
Nov 26 03:06:45 **** systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 03:06:45 **** systemd[1]: Failed to start Mount ZFS filesystems.
Nov 26 03:06:45 **** systemd[1]: zfs-mount.service: Unit entered failed state.
Nov 26 03:06:45 **** systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

● zfs-share.service - ZFS file system shares
   Loaded: loaded (/lib/systemd/system/zfs-share.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2018-11-26 03:06:53 JST; 53min ago
     Docs: man:zfs(8)
  Process: 1611 ExecStart=/sbin/zfs share -a (code=exited, status=1/FAILURE)
  Process: 1593 ExecStartPre=/bin/rm -f /etc/dfs/sharetab (code=exited, status=0/SUCCESS)
 Main PID: 1611 (code=exited, status=1/FAILURE)

Nov 26 03:06:53 **** systemd[1]: Starting ZFS file system shares...
Nov 26 03:06:53 **** zfs[1611]: The ZFS modules are not loaded.
Nov 26 03:06:53 **** zfs[1611]: Try running '/sbin/modprobe zfs' as root to load them.
Nov 26 03:06:53 **** systemd[1]: zfs-share.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 03:06:53 **** systemd[1]: Failed to start ZFS file system shares.
Nov 26 03:06:53 **** systemd[1]: zfs-share.service: Unit entered failed state.
Nov 26 03:06:53 **** systemd[1]: zfs-share.service: Failed with result 'exit-code'.

● zfs-zed.service - ZFS Event Daemon (zed)
   Loaded: loaded (/lib/systemd/system/zfs-zed.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2018-11-26 03:06:46 JST; 54min ago
     Docs: man:zed(8)
  Process: 904 ExecStart=/usr/sbin/zed -F (code=exited, status=1/FAILURE)
 Main PID: 904 (code=exited, status=1/FAILURE)

Nov 26 03:06:46 **** systemd[1]: Started ZFS Event Daemon (zed).
Nov 26 03:06:46 **** zed[904]: ZFS Event Daemon 0.6.5.9-5 (PID 904)
Nov 26 03:06:46 **** zed[904]: Failed to initialize libzfs
Nov 26 03:06:46 **** systemd[1]: zfs-zed.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 03:06:46 **** systemd[1]: zfs-zed.service: Unit entered failed state.
Nov 26 03:06:46 **** systemd[1]: zfs-zed.service: Failed with result 'exit-code'.
Before creating the ZFS partition, first load the kernel module with modprobe zfs. Without this, zpool and related commands fail.
# modprobe zfs
$ lsmod | grep zfs
zfs                  3739648  0
zunicode              331776  1 zfs
zavl                   16384  1 zfs
icp                   286720  1 zfs
zcommon                77824  1 zfs
znvpair                90112  2 zfs,zcommon
spl                   118784  4 zfs,icp,znvpair,zcommon
# zpool list
no pools available
# zpool status
no pools available
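With the module loaded, the units that failed at boot could also be restarted by hand; this step is optional, since the reboot later in this procedure has the same effect.
# systemctl restart zfs-zed.service zfs-mount.service zfs-share.service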
Create the mount point for the RAID-Z storage pool.
# mkdir --parents /srv/disk02
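The zpool create below refers to the disks by their stable names under /dev/disk/by-id, so the pool survives device reordering across boots; the names can be looked up like this (serial numbers are masked here, as in the rest of this note).
$ ls -l /dev/disk/by-id/ | grep TOSHIBA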
Create the RAID-Z storage pool with the zpool command. It is mounted automatically, and ongoing management is handled by zed. Then check the state.
# zpool \
create \
-f \
-m /srv/disk02/ \
tank00 \
raidz \
ata-TOSHIBA_MD05ACA800_************ \
ata-TOSHIBA_MD05ACA800_************ \
ata-TOSHIBA_MD05ACA800_************ \
;
# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank00  21.8T   936K  21.7T         -     0%     0%  1.00x  ONLINE  -
$ df \
--human-readable \
--all \
--print-type \
/srv/disk02/ \
;
Filesystem     Type  Size  Used Avail Use% Mounted on
tank00         zfs    15T  128K   15T   1% /srv/disk02
Note that df reports usable space after RAID-Z parity overhead, which is why it shows less than the raw pool size reported by zpool list.
# zpool status
  pool: tank00
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        tank00                                   ONLINE       0     0     0
          raidz1-0                               ONLINE       0     0     0
            ata-TOSHIBA_MD05ACA800_************  ONLINE       0     0     0
            ata-TOSHIBA_MD05ACA800_************  ONLINE       0     0     0
            ata-TOSHIBA_MD05ACA800_************  ONLINE       0     0     0

errors: No known data errors
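"scan: none requested" means no scrub has ever run. Scheduling a periodic scrub is a common follow-up; a monthly cron entry like the one below is one way to do it (a sketch; check first whether the zfsutils package already ships such a job).
# echo '0 3 1 * * root /sbin/zpool scrub tank00' > /etc/cron.d/zfs-scrub-tank00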
Reboot.
Confirm that the system is no longer degraded.
$ systemctl status -l | head
● ****
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Mon 2018-11-26 05:16:57 JST; 5min ago
   CGroup: /
           ├─user.slice
           │ ├─user-1001.slice
           │ │ ├─session-2.scope
           │ │ │ ├─2166 sshd: **** [priv]
# systemctl status zfs-mount.service zfs-share.service zfs-zed.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2018-11-26 05:17:01 JST; 4min 3s ago
     Docs: man:zfs(8)
  Process: 1159 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
 Main PID: 1159 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/zfs-mount.service

Nov 26 05:17:01 **** systemd[1]: Starting Mount ZFS filesystems...
Nov 26 05:17:01 **** systemd[1]: Started Mount ZFS filesystems.

● zfs-share.service - ZFS file system shares
   Loaded: loaded (/lib/systemd/system/zfs-share.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2018-11-26 05:17:12 JST; 3min 52s ago
     Docs: man:zfs(8)
  Process: 2046 ExecStart=/sbin/zfs share -a (code=exited, status=0/SUCCESS)
  Process: 2029 ExecStartPre=/bin/rm -f /etc/dfs/sharetab (code=exited, status=0/SUCCESS)
 Main PID: 2046 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/zfs-share.service

Nov 26 05:17:12 **** systemd[1]: Starting ZFS file system shares...
Nov 26 05:17:12 **** systemd[1]: Started ZFS file system shares.

● zfs-zed.service - ZFS Event Daemon (zed)
   Loaded: loaded (/lib/systemd/system/zfs-zed.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-11-26 05:17:02 JST; 4min 1s ago
     Docs: man:zed(8)
 Main PID: 1300 (zed)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/zfs-zed.service
           └─1300 /usr/sbin/zed -F

Nov 26 05:17:02 **** systemd[1]: Started ZFS Event Daemon (zed).
Nov 26 05:17:02 **** zed[1300]: ZFS Event Daemon 0.6.5.9-5 (PID 1300)
Nov 26 05:17:02 **** zed[1300]: Processing events since eid=0
A quick write test.
# touch /srv/disk02/hoge
# mkdir /srv/disk02/hogedir
To keep downtime short, set up a daily file copy that runs for about a month before the cutover.
# rsync --archive /srv/disk01/share/ /srv/disk02/share/
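One way to schedule this daily copy is a cron entry along these lines (hypothetical timing; pick a quiet period).
# echo '30 4 * * * root /usr/bin/rsync --archive /srv/disk01/share/ /srv/disk02/share/' > /etc/cron.d/premigration-sync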
Just before migration day, run rsync --archive --delete; but first, do a dry run to check which files would be deleted.
# rsync --dry-run --verbose --archive --delete /srv/disk01/share/ /srv/disk02/share/ | less
The actual data migration.
# rsync --archive --delete /srv/disk01/share/ /srv/disk02/share/
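Before decommissioning the old disk, a checksum-based dry run can confirm that the two trees really match; if the copy is complete it should report no changes (a sketch).
# rsync --dry-run --checksum --archive --delete --itemize-changes /srv/disk01/share/ /srv/disk02/share/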
After the migration, change the configuration so that the device behind /srv/disk01/ is mounted at a different directory, and keep it locked up (mounted read-only) there for about a month. Then mount the source of /srv/disk02 at /srv/disk01.
# vi /etc/fstab    ## change the old disk's options to defaults,ro
# zfs get mountpoint tank00
# zfs set mountpoint=/srv/disk00 tank00
# zfs get mountpoint tank00
Reboot.
Watch things for about a month; if no problems appear, the migration is complete.