@mgerdts AH-HAH... googling indicates that I need to use zdb rather than zpool to get ashift values... and... there we are: ashift of 12 on the new devices and 9 on the old. I think we have the smoking gun... once I was actually looking in the right place. Thanks!
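
For anyone following along: 'zpool get ashift' reports 0 when the pool was left at auto-detect, so the actual per-vdev value only shows up in the zdb config dump. A minimal sketch, with a placeholder pool name:

    # Dump the cached pool configuration and pull out the per-vdev ashift
    zdb -C tank | grep ashift
    # ashift: 12  -> 4 KiB sectors (newer devices)
    # ashift: 9   -> 512 B sectors (older devices)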

@mgerdts no raidz at all, simple stripe/concat. 4x1T on machine A, 2x4T on machine B.

@mgerdts I did see that checksums are "on" on the source and "skein" on the destination. But on 4T of 8K pages, that's just 16G or 32G of additional space, total (depending on 256 bits or 512 bits of hash size, and that's the worst case since it doesn't account for fletcher4 already being 128 bits).
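
Working those numbers through (a rough sketch assuming 4 TiB of data stored as 8 KiB blocks):

    4 TiB / 8 KiB = 536,870,912 blocks (~512M)
    512M blocks x 32 B (256-bit hash) = 16 GiB
    512M blocks x 64 B (512-bit hash) = 32 GiB

Either way it is tens of gigabytes at most, nowhere near enough to account for a multi-terabyte difference.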

@mgerdts ok, so compression ratios are different, by about a factor of 2x... which explains it. But why? I looked at ashift (the zpool property) on both and it is zero on both. Both are zstd (which I additionally forced with a -o on zfs-receive, since ONE of the original datasets was lz4... but even if that were a degenerate compression case in converting lz4 to zstd, it doesn't explain nearly enough of the difference).
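
A quick way to compare what each side actually achieved (pool and dataset names here are placeholders, not the real ones from this thread):

    # On each machine: achieved ratio, configured algorithm, and recordsize
    zfs get compressratio,compression,recordsize poolA/pgdata
    zfs get compressratio,compression,recordsize poolB/pgdata
    # The sector size each pool's vdevs actually use
    zdb -C poolA | grep ashift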

@javierk4jh My understanding from reading zfs-send and zfs-receive and from online searches is that you actually cannot change recordsize that way, as the stream itself consists of deltas.

That is, if the incremental says to "set block 15 to 0xfeedface", the receiver doesn't have the context of the rest of the block to fill in.

Granted, this is a solvable problem (just read the original block and write out the whole thing), but they opted not to take on that complexity.

I did check anyway, and the recordsizes look good.
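
The recordsize property only governs new writes, so to confirm what block size the received files actually carry on disk, something like the following works (dataset name and object number are placeholders):

    # Per-object listing; the dblk column is the data block size
    zdb -dd poolB/pgdata
    # Drill into a single file's object for full detail
    zdb -ddddd poolB/pgdata <object-number>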

@mgerdts I am looking at it via 'zfs list' and 'df'; both show consistent information. The main difference seems to be in the refer (I am redoing the receive right now, so I am going from memory); it appears that the receive has multiple full copies.

And zfs-receive seems to corroborate that by saying it has multiple 'full' streams... maybe? In ~7 more hours the receive will be finished.
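
To see where the space is going per snapshot on the receiving side, something like this helps (names are placeholders):

    # 'refer' is what each snapshot references; 'written' is what is unique to it
    zfs list -t all -r -o name,used,refer,written poolB/pgdata

As I understand it, a -R replication package carries one full stream per dataset (for its oldest snapshot) plus incrementals on top, so several 'full' lines in the verbose receive output would normally just mean several datasets rather than duplicated data.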

@mgerdts Not small files; the average file size is close to 1 gig (this is a Postgres database data filesystem). There are 2 recordsizes on it: a 'precopy' snapshot with 128K records, and then I set it to 8K to get better performance and copied everything over. I also changed lz4 to zstd on the new copies. No raid on either, straight concat/stripe (the underlying hardware does all of the redundancy).
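
For reference, that setup would look roughly like this (pool/dataset names are placeholders):

    # Match the 8 KiB Postgres page size and switch lz4 to zstd
    zfs set recordsize=8K poolA/pgdata
    zfs set compression=zstd poolA/pgdata
    # Note: existing files keep their old 128K blocks until rewritten/copied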

OK, Mastodon people, answer me this: I have a ZFS filesystem on machine A, and I zfs-send it to machine B (zfs send -R, zfs receive -Fv). Machine A has 4T of space *total*. Machine B has 8T of space. When done, machine B has only 250g of free space available; the filesystem is almost _twice_ as large?
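
Concretely, the transfer described here would be something along these lines (host, pool, and snapshot names are placeholders):

    # Snapshot the tree, then replicate everything: snapshots, properties, descendants
    zfs snapshot -r poolA/pgdata@migrate
    zfs send -R poolA/pgdata@migrate | ssh machineB zfs receive -Fv poolB/pgdata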

Roses are red.
Roses are blue.
Depending on their velocity
relative to you.

@toran @lattera @kev The BSDs are a friendly and supportive community; you'll enjoy the fun, then stay for the stability ☺️

Rebranding Linode as “Akamai Connected Cloud” is just replacing words with dumber words until and unless Akamai replaces Linode’s API with a surly hungover 22-year-old named Jason making changes via ServiceNow helpdesk tickets.

@skulegirl Xcode is like the most developer-hostile IDE I have ever experienced. It will randomly decide to upgrade itself, and your build tools will fail because you now need to reinstall a bunch of its components.

@burgerbecky did you test for C19? This sounds a LOT like when I had C19... that cough especially... it just won't go away. I had to test _3_ times, every other day, before the 3rd test (after 6 days of symptoms) finally came back positive.
