
ZFS and SAN -- best practices? - Server Fault
That may mean using ZFS everywhere along with specialized Oracle storage devices (some people do that, exposing many disks to ZFS without problems and using Oracle's tools for management), it may mean using only enterprise SAN products, or it may mean using some hybrid (in which case you'll probably have to develop some tools and processes ...
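A minimal sketch of the "many disks exposed to ZFS" approach, assuming a Linux host with dm-multipath and two SAN LUNs; the device and pool names are hypothetical:

    # Build a mirrored pool directly on multipathed SAN LUNs
    zpool create tank mirror /dev/mapper/mpatha /dev/mapper/mpathb
    zpool status tank    # confirm both LUNs show ONLINE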
ZFS and SAN: issue with data scrubbing - Server Fault
May 4, 2022 · Independent of the ZFS geometry, we have noticed that a high write load (> 1 GB/s) during ZFS scrubs results in disk errors, eventually leading to faulted devices. By looking at the files showing errors, we could link this problem to the scrub process trying to access data still present in the SAN's cache.
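One hedged workaround, assuming OpenZFS 0.8 or later and a hypothetical pool name: pause the scrub during heavy write windows so the scrub and application writes don't hit the SAN cache concurrently.

    zpool scrub -p tank    # pause a running scrub
    zpool scrub tank       # resume it once the write burst is over
    # Optionally reduce concurrent scrub reads per vdev (Linux OpenZFS
    # module parameter; the value 1 is an assumption, not a recommendation):
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active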
high availability - ZFS, SAN's, HA, Possible? - Server Fault
Highly available ZFS SAN/NAS with SMB and NFS - Server Fault
ZFS isn't a cluster-aware file system, unfortunately, which limits its use in an instantaneous-failover server cluster. However, if your platform can tolerate a few seconds of outage/pause, then something like NexentaStor can be configured to be fairly highly available at a reasonable cost, and it's very simple to set up and manage too.
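The mechanism behind such two-head setups is usually a pool export/import over shared SAN storage; a minimal manual sketch, assuming both heads see the same LUNs and a hypothetical pool name:

    # On the failing head, if it is still responsive:
    zpool export tank
    # On the standby head, which sees the same SAN LUNs:
    zpool import -f tank   # -f forces import when the export never completed
    zfs mount -a           # remount datasets, then restart the SMB/NFS services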
Handling XenServer snapshotting and cloning with ZFS SAN
June 5, 2015 · The SAN would be taking automatic (and efficient) periodic ZFS snapshots that could go back in time a while, and I'd love to be able to revert a VM to such a ZFS snapshot. Would letting ZFS handle snapshotting/cloning instead of doing it through XenServer be advisable, and if so, what's the best way to go about it?
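The ZFS side of that workflow is straightforward; a sketch assuming one dataset per VM (the dataset and snapshot names are hypothetical):

    # Periodic snapshot, e.g. from cron:
    zfs snapshot tank/vms/guest1@2015-06-05-0300
    # Revert the VM's dataset; -r destroys any snapshots newer than the target:
    zfs rollback -r tank/vms/guest1@2015-06-05-0300
    # Or clone the snapshot to test a revert without touching the original:
    zfs clone tank/vms/guest1@2015-06-05-0300 tank/vms/guest1-test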
ZFS SAN boot - what makes the OS crash during SAN switch change
August 31, 2015 · I have a Solaris 10 server with ZFS SAN boot. The OS pool has disks configured from two different HBAs, c1 and c2, which have LUNs from different FA ports on the SAN array, and c1 and c2 are connected to different fabric switches. Still, if one of the cables is pulled (or fails), the OS seems to hang and needs a power cycle. What is it in ...
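The first thing to verify in a setup like this is whether MPxIO is actually managing both paths; a Solaris 10 sketch:

    stmsboot -L        # list devices and their MPxIO (STMS) device-name mappings
    mpathadm list lu   # show each LUN with its operational path count
    # Enable MPxIO on the FC HBAs if it is off (takes effect after a reboot):
    stmsboot -e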
How does zfs raidz-2 recover from 3 drives down? - Server Fault
January 22, 2020 · I'm wondering what happened, how ZFS was able to recover completely, or if my data is still truly intact. When I came in last night I saw this, to my dismay, then confusion. zpool status pool: san
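To answer the "is my data still intact" part, a scrub re-reads and checksums every allocated block; a sketch against the pool named in the output above:

    zpool scrub san
    zpool status -v san   # after the scrub, look for "errors: No known data errors"
                          # and a scan line like "scrub repaired 0B ... with 0 errors"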
ZFS best practices with hardware RAID - Server Fault
Again - if your only reason to use ZFS is an improvement in data resiliency, and your chosen hardware platform requires a RAID card to present a single LUN to ZFS (or multiple LUNs that you have ZFS stripe across), then you're doing nothing to improve data resiliency, and thus your choice of ZFS may not be appropriate.
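For contrast, the layout that does let ZFS repair corruption gives it the raw disks; a sketch assuming the controller is in JBOD/HBA mode and hypothetical Linux device names:

    # ZFS owns the redundancy, so it can both detect and repair bad blocks:
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    # The anti-pattern from the answer above: a single RAID-card LUN, where ZFS
    # can detect corruption but has no redundant copy to repair it from:
    # zpool create tank /dev/sdX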
zfs - Acquiring new servers to run ESXi and SAN/NAS - Server Fault
Here is our goal: set up new servers to turn our entire physical computer network into 3 physical groups, which are: Server 1 - NAS - Openfiler/NexentaStor CE/FreeNAS/(other suggestions) Server ...
VMware ESXi and ZFS hardware & configuration recommendations
Building a poor man's ZFS SAN for ESXi will likely suit your needs just fine and give you some room to grow, especially if you stick with mirrored (not RAIDZ) pools. Once your IO needs exceed 2x GigE or you get a second ESXi host, things get trickier (L3 switches, 10GigE, 4Gb FC, etc.), but you'll cross that bridge when you come to it.
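A sketch of that mirrored-pairs layout with an NFS export for the ESXi host, assuming hypothetical disk and pool names:

    # Striped mirrors favor the small random IO that VM workloads generate:
    zpool create vmpool mirror sda sdb mirror sdc sdd
    zfs create vmpool/esxi
    zfs set sharenfs=on vmpool/esxi   # export the dataset over NFS to ESXi
    # ESXi issues sync NFS writes, so a dedicated SLOG device helps latency here.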