
Expanding the capacity of ZFS pool drives : r/zfs - Reddit
Oct 14, 2020 · Er, I meant pool, not vdev. My apologies. IMO, ZFS' send and recv tools are one of its best features. No, it's not really any easier on the disks -- you're still touching all of that data -- but migrating it is a breeze with ZFS. You can even send an entire filesystem to another machine (with ZFS installed) via SSH.
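A minimal sketch of that SSH-based send, assuming hypothetical pool, dataset, snapshot, and host names (none of these come from the thread):

    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | ssh backuphost zfs receive backuppool/data

The receiving side only needs ZFS installed and a pool with enough free space; using send -R instead would also carry the dataset's properties and older snapshots.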
Easiest way to clone/migrate a pool : r/zfs - Reddit
Feb 19, 2021 · I would understand for anything in the output of zpool. But zfs is the layer on top of the pool. Pool layout should not matter for that. I now did:
    zfs list -t all -r -o name,used newpool/plvl5i0 | sed -e 's#newpool/##' | grep -v syncoid > newpool.txt
    zfs list -t all -r -o name,used plvl5i0 | grep -v syncoid > oldpool.txt
    diff -u oldpool.txt ...
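For the migration itself, a minimal recursive-replication sketch; the pool names plvl5i0 and newpool come from the snippet, while the snapshot name and the decision to receive under newpool/plvl5i0 are assumptions:

    zfs snapshot -r plvl5i0@clone
    zfs send -R plvl5i0@clone | zfs receive -u newpool/plvl5i0

The list/diff commands above are then a sanity check that every dataset name made it across.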
A very short guide into how Proxmox uses ZFS : r/Proxmox - Reddit
Nov 7, 2020 · NOTE: In some cases, e.g. when the drives were plugged into another system, you need to manually help ZFS via the "zpool import" command. Each pool has a name. Here: "rpool". Example: Here is a zpool list -v of a pool named "rpool" containing 2 striped disks with a total size of 79G:
    NAME SIZE ALLOC FREE ...
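A minimal sketch of that manual import step; the pool name rpool comes from the snippet, and -f is only needed if the pool was not cleanly exported on the old system:

    zpool import          # scan attached disks and list importable pools
    zpool import -f rpool # import it, overriding the "last used by another system" check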
ZFS: Fast pool with SSD/HDD hybrid? : r/zfs - Reddit
Oct 14, 2023 · ZFS caching is super intelligent, but nothing beats the knowledge that the software actually using the data -- and, more importantly, you yourself -- have about it. Using SSDs as supplementary devices to an HDD pool, while definitely a bit …
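A minimal sketch of the usual SSD supplements to an HDD pool; pool and device names are hypothetical:

    zpool add tank log mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b      # SLOG for synchronous writes
    zpool add tank cache /dev/disk/by-id/ssd-c                                 # L2ARC read cache
    zpool add tank special mirror /dev/disk/by-id/ssd-d /dev/disk/by-id/ssd-e  # metadata/small-block vdev

Unlike log and cache devices, a special vdev holds real pool data, so losing it loses the pool; it should be mirrored accordingly.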
Creating a SMB share with my ZFS pool : r/zfs - Reddit
May 3, 2024 · I've tried using the built-in sharesmb=on that's included with ZFS, and at least in the Proxmox implementation it sucks; it just seems easier to use Samba itself and ZFS itself. I had thought maybe using SMB-on-ZFS would reduce complexity, but for me at least that wasn't the case. ZFS is great, but it can't and shouldn't be expected to be and do ...
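A minimal sketch of the two routes being compared; the dataset path and share name are hypothetical:

    # ZFS-managed share (the route the poster found awkward under Proxmox):
    zfs set sharesmb=on tank/media

    # Plain Samba over a ZFS dataset instead -- an smb.conf stanza:
    [media]
        path = /tank/media
        read only = no
        browseable = yes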
Backing Up the Whole Pool : r/zfs - Reddit
Apr 1, 2024 · ZFS is awesome. Combining checksumming and parity/redundancy is awesome. But there are still lots of potential ways for your data to die, and you still need to back up your pool. Period. PERIOD! So, is it really necessary to back up the pool, not just the data? Backing up the data is fairly simple: zfs send/receive. (Or even cp/rsync.)
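A minimal sketch of the send/receive route, done incrementally so repeat backups only ship the changes; pool, snapshot, and host names are assumptions:

    zfs snapshot -r tank@backup-2024-04-01
    # first run: full replication
    zfs send -R tank@backup-2024-04-01 | ssh backuphost zfs receive -u backup/tank
    # later runs: incremental against the previous snapshot
    zfs snapshot -r tank@backup-2024-05-01
    zfs send -R -i tank@backup-2024-04-01 tank@backup-2024-05-01 | ssh backuphost zfs receive -F -u backup/tank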
New ZFS Pool Recommendations : r/zfs - Reddit
Apr 1, 2019 · The first pool is for bulk storage using 10X 4TB HDDs and the second pool for fast-access VMs using 6X 500GB SSDs. Thinking of something like this:
HDD pool (RAIDZ2):
    zpool create -o ashift=12 -o autoexpand=on hddpool raidz2 \ (list of HDDs /dev/disk/by-id)
    zfs set compression=lz4 recordsize=1M xattr=sa dnodesize=auto hddpool
SSD pool (RAID1+0):
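The snippet cuts off before the SSD pool command; as a purely hypothetical sketch of what a striped-mirror (RAID1+0) layout typically looks like, with invented device paths:

    zpool create -o ashift=12 ssdpool \
      mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 \
      mirror /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4 \
      mirror /dev/disk/by-id/ssd5 /dev/disk/by-id/ssd6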
How do I move /home/user to a zfs pool? - Ask Ubuntu
Oct 8, 2015 · That partition is the one just created (NOT formatted) that will host the ZFS pool dedicated to the /home directory. In my case, the second partition's name was ata-VBOX_HARDDISK_VB49b2d698-41fa84b3-part2; you will see that name in the pool creation syntax below. Now we create the zhome pool using the unformatted partition 2 we just created. From the terminal:
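A minimal sketch of the steps that answer leads into; the partition name is taken from the snippet, while the dataset name, copy step, and mountpoint swap are assumptions about the rest of the procedure:

    sudo zpool create zhome /dev/disk/by-id/ata-VBOX_HARDDISK_VB49b2d698-41fa84b3-part2
    sudo zfs create zhome/user
    sudo rsync -a /home/user/ /zhome/user/            # copy the existing home over
    sudo mv /home/user /home/user.old                 # keep the old copy until verified
    sudo zfs set mountpoint=/home/user zhome/user     # mount the dataset in its place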
PSA: HOWTO properly expand a RAIDZ1 / RAIDZ2 zfs pool - Reddit
Feb 20, 2022 ·
    mismatched replication level: pool uses 3-way raidz and new vdev uses 4-way raidz
I'm not talking about balancing the data allocated between vdevs. If you want a sane configuration with somewhat predictable I/O and failure recovery: you match vdev sizes, drive sizes, and disk speeds within the same ZFS pool.
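A minimal sketch of the matching-vdev expansion that avoids that error; pool and device names are hypothetical, and the existing vdev is assumed to be a 3-disk raidz1:

    zpool status tank      # check the width of the existing raidz vdev first
    zpool add tank raidz1 /dev/disk/by-id/d4 /dev/disk/by-id/d5 /dev/disk/by-id/d6
    # zpool refuses a vdev whose layout differs from the existing ones unless
    # you force it with -f, which is exactly what the quoted error is about.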
ZFS bpool is almost full; how can I free up space so I can keep ...
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue? [Y/n]
    Requesting to save current system state
    ERROR couldn't save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%. Free space on pool "bpool" is 10%. Please remove some states manually to free up space.
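A minimal sketch of the usual cleanup for that error, assuming the zsys-managed Ubuntu setup the message comes from; the snapshot name below is hypothetical, and zsys also has its own state-removal subcommand (zsysctl state remove) for states it manages:

    zfs list -t snapshot -r bpool                               # see which boot-pool snapshots exist
    sudo zfs destroy bpool/BOOT/ubuntu_abc123@autozsys_xxxxxx   # hypothetical snapshot name
    sudo apt autoremove --purge                                 # also clears old kernels out of /boot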