ok, mastodon people. Answer me this: I have a ZFS filesystem on machine A, and I zfs-send it to machine B (zfs send -R on one side, zfs receive -Fv on the other). Machine A has 4T of space *total*; machine B has 8T of space. When done, machine B has only 250g of free space available; the filesystem is almost _twice_ as large?

@david my first guess is that you have a lot of small files and something is causing zfs to insert a lot of padding.

Is ashift the same on both pools (zpool get ashift, I think)? My guess is the source may be 9 (512 byte minimum block size) and the destination is 12 (4k min block).

Is the source not raidz while the destination is raidz?

How are you looking at total space? zpool and zfs commands look at different things?
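
A rough sketch of how one might compare the two pools (the pool name "tank" is just a placeholder):

```
# vdev layout and the ashift property as zpool reports it (may print 0, meaning "auto")
zpool status tank
zpool get ashift tank

# space accounting from both the pool and the dataset perspective
zpool list tank
zfs list -o space -r tank
```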

@mgerdts Not small files, the average file size is close to 1 gig (this is a postgres database data filesystem). There are 2 recordsizes on it: a 'precopy' snapshot with 128K records, and then I set it to 8k to get better performance and copied everything over. I also changed compression from lz4 to zstd on the new copies. No raid on either, straight concat/stripe (the underlying hardware does all of the redundancy).

@mgerdts I am looking at it via 'zfs list' and 'df'; both show consistent information. The main difference seems to be in the refer (I am redoing the receive right now, so I am going from memory); it appears that the receive has multiple full copies.

And the zfs-receive output seems to corroborate that by saying it has multiple 'full' streams... maybe? In ~7 more hours the receive will be finished.

@david maybe the copies or compressratio properties on each dataset will offer clues.
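
For example (dataset name is a placeholder), something like this shows those properties for every dataset at once:

```
zfs get copies,compression,compressratio,recordsize -r tank
```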

@mgerdts ok, so compression ratios are different, by about a factor of 2x, which explains it. But why? I looked at ashift (a zpool property) on both and it is zero on both. Both are zstd (which I additionally forced with a -o on zfs-receive, since ONE of the original datasets was lz4... but even if that was a degenerate compression case in converting lz4 to zstd, it doesn't explain nearly enough of the difference).

@mgerdts I did see that checksums are "on" on the source and "skein" on the destination, but on 4t of 8k pages that's just 16g or 32g of additional space, total (depending on 256 bits or 512 bits of hash size, and that's the worst case since it doesn't account for fletcher4 already being 128 bits).
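
As a quick check, that back-of-the-envelope arithmetic (assuming one extra 32- or 64-byte checksum per 8k block) works out as:

```
blocks=$(( 4 * 2**40 / 8192 ))   # ~537M 8k blocks in 4T
echo "$(( blocks * 32 / 2**30 )) GiB extra at 256 bits, $(( blocks * 64 / 2**30 )) GiB at 512 bits"
```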

@david I'm not sure what to make of ashift=0: that's surely not the real value of ashift. Based on openzfs.github.io/openzfs-docs saying that ashift can be changed, there have been changes in this area since I last used zfs a lot.

If ashift is the same between the two pools, that points us back to the question of whether you are using raidz or draid and, if so, whether both pools have the same number of disks per raidz vdev.

@mgerdts no raidz at all, simple stripe/concat. 4x1T on machine A, 2x4T on machine B.

@mgerdts AH-HAH... googling indicates that I need to use zdb rather than zpool to get ashift values... and there we are: ashift of 12 on the new devices and 9 on the old. I think we have the smoking gun... once I was actually looking in the right place. Thanks!
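
For anyone following along, the zdb route looks roughly like this (pool name and device are placeholders):

```
# per-vdev ashift straight from the pool configuration
zdb -C tank | grep ashift

# or read it from a member device's label
zdb -l /dev/sdX | grep ashift
```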

@mgerdts Now all I have to do is kill the pool... and restore.. again.. the ... 5th? time is the charm?

@david 8k recordsize + compression could lead to a poor interaction with ashift=12 as well. Suppose an 8k block would compress to 4200 bytes. With ashift=12, the compressed 8k block will consume 2 x 4k sectors (8k total). With ashift=9, the compressed 8k block will consume 9 x 512b sectors (4.5k total).
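
A quick sketch of that rounding, with plain shell arithmetic and illustrative byte counts:

```
# round a compressed record up to the sector size implied by ashift (2^9=512, 2^12=4096)
roundup() { echo $(( ( $1 + $2 - 1 ) / $2 * $2 )); }

for bytes in 2200 4200 6500; do
  printf "compressed %5dB -> ashift=9: %5dB  ashift=12: %5dB\n" \
    "$bytes" "$(roundup "$bytes" 512)" "$(roundup "$bytes" 4096)"
done
```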

With raidz the overhead varies by number of drives in a raidz vdev. See my explanation here:

github.com/openzfs/zfs/blob/ma

@david re-reading that comment I see that it was updated with draid information, which was integrated after I added this comment with a fix. So things are more complicated with raidz *and* draid. Glad to see the comment update wasn't missed!

@david while we have concluded raidz is not to blame here, I figured it may be worth mentioning that I did a talk on this work while at #Joyent.

Slides: us-east.manta.joyent.com/Joyen
Video: youtu.be/sTvVIF5v2dw

Contrary to what I predicted back then, today’s NVMe SSDs pretty much all present as 512n, not as 4Kn.

#zfs

@mgerdts @david

(probably not, because dedup doesn't show up in per-dataset usage like zfs list -o space, only at the zpool level, but:)

Is there any chance the source has (or had) dedup and that's not been carried across?

@david @mgerdts

ashift and less-optimal compression packing, yeah. Surprised it was that much but still all too plausible

@uep @mgerdts I think the key here is that it is a postgres database store, so the recordsize is 8k to align with the postgres page size, and with an ashift of 12 (4k sectors) the BEST possible compression is 2x, and anything less than 2x ends up as 1x; that means realized compression has to be in the 1.0 to 2.0 range, whereas on the original I was in the 3.x to 4.x range. Math checks out.
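
To make that bound concrete, a rough sketch of the effective per-record ratio (8192 / bytes actually allocated) after sector rounding; with 4k sectors the only possible outcomes for an 8k record are 2.0x or 1.0x:

```
roundup() { echo $(( ( $1 + $2 - 1 ) / $2 * $2 )); }

for bytes in 2048 2700 4000 4200 6000; do
  a9=$(roundup "$bytes" 512); a12=$(roundup "$bytes" 4096)
  printf "compressed %5dB  ratio@ashift9=%s  ratio@ashift12=%s\n" "$bytes" \
    "$(awk -v a="$a9"  'BEGIN{printf "%.2f", 8192/a}')" \
    "$(awk -v a="$a12" 'BEGIN{printf "%.2f", 8192/a}')"
done
```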

@david @mgerdts Yeah, the lower bound on useful compression is a common issue; the upper bound in this case is less obvious but nasty.

@david Are the dedup settings the same for both pools?

@david is the record size the same? If your source uses larger-than-128K records and you didn't use -L, it may be using 128K records on the target (128K records incur more overhead than larger records do). I saw this when moving a zvol with 1M records to one with 128K records.
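
For reference, a send that preserves large records might look something like this (dataset, snapshot, and host names are placeholders; -L is --large-block, -c keeps blocks compressed on the wire):

```
zfs send -R -L -c tank/data@snap | ssh hostB zfs receive -Fv tank/data
```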

@javierk4jh My understanding from reading the zfs-send and zfs-receive man pages and from online searches is that you actually cannot change recordsize that way, since the stream itself is deltas.

That is, if the incremental says to "set block 15 to 0xfeedface", the receiver doesn't have the context of the rest of the (larger) block to fill in.

Granted, this is a solvable problem (just read the original and write out the whole block), but they opted not to take on that complexity.

I did check anyway, and recordsizes look good

@david what about snapshots? Can you check them? `zfs list -t all`, maybe they are taking that much space :)
