The Btrfs Blues
Aug 3, 2025
A Btrfs bug that bites is in the wild, and we discover whole home audio that works like a charm.
Sponsored By:
- Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
- 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
- Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.
Links:
- 💥 Gets Sats Quick and Easy with Strike
- 📻 LINUX Unplugged on Fountain.FM
- How to recover from BTRFS errors | Support | SUSE
- btrfs-zero-log(8) — btrfs-progs — Debian Manpages — This command will clear the filesystem log tree. This may fix a specific set of problems when the filesystem mount fails due to the log replay.
- 2018 Patch
- Git: Btrfs: fix warning when replaying log after fsync of a tmpfile
- Git: btrfs: fix fsync of files with no hard links not persisting deletion
- problematic patch pulled into 6.16 on May 26th
- problematic patch pulled into 6.15.3 on June 19th
- Increased reports since 6.15.3 of corruption within the log tree - Peter Jung
- Null deref during attempted replay of corrupt TREE_LOG in newer kernel - Russell Haley
- System failed to boot – Btrfs log tree error / System Administration / Arch Linux Forums
- PATCH: btrfs: fix log tree replay failure due to file with 0 links and extents — When attempting to mount the fs, the log replay will fail
- patch on for-next branch of btrfs tree
- generic: test fsync of file with 0 links and extents
- Music Assistant — Music Assistant is a music library manager for your offline and online music sources which can easily stream your favourite music to a wide range of supported players and be combined with the power of Home Assistant!
- Music Assistant Installation Instructions
- Installation Instructions
- Music Assistant Music Providers
- Music Assistant Player Providers
- Home Assistant Plugin
- Home Assistant Voice Preview Edition - Home Assistant
- SYMFONISK Sonos WiFi bookshelf speaker, black smart/gen 2 - IKEA US
- HomePod - Apple
- Belkin SoundForm Connect AirPlay 2 Adapter & Airplay 2 Receiver
- WiiM Mini AirPlay 2 Wireless Audio Streamer
- Google Chromecast - Streaming Device with HDMI Cable
- Google Chromecast Audio Media Streamer - *** 2 PACK *** | eBay
- droans/mass_queue — Actions to control player queues for Music Assistant
- punxaphil/maxi-media-player — Media card for Home Assistant UI with a focus on managing multiple media players, but not excluding single player setups.
- NinDTendo/homeassistant_gradual_volume_control — Home Assistant integration providing a service to gradually change the volume of media_players over a given timespan.
- Chawan: TUI web browser — A text-mode web browser and pager for Unix-like systems, with a focus on implementing modern web standards while remaining self-contained, easy to understand and extensible.
- SilverBullet — SilverBullet is a tool to develop, organize, and structure your personal knowledge and to make it universally accessible across all your devices.
- HeliumOS — An atomic desktop operating system for your devices.
- LINUX Unplugged 620 - Brent Loves Building Things — Off-the-shelf didn’t cut it, so we built what we needed using open hardware and open source.
- rust-motd — Beautiful, useful, configurable MOTD generation with zero runtime dependencies
- rustdress — Self hosted Lightning Address Server
- rustdress: init at 0.5.2 by jordan-bravo
- rustdress in nixpkgs
- Plausible Slop: Generative AI and Open Source Cybersecurity
- Plausible Slop Timecode Link
- Death by a thousand slops | daniel.haxx.se
- Pick: PlexRipper — A cross-platform Plex media downloader that seamlessly adds media from other Plex servers to your own!
- PlexRipper Docs
- Pick: kde-control-station — A modern configuration center for KDE plasma based on the awesome kde_controlcentre by Prayag2
- Pick: kAirPods — Native AirPods integration for KDE Plasma 6 with real-time battery monitoring, noise control, and panel widget
- AmilieCoding/gnomePods — Native AirPods integration for GNOME with real-time battery monitoring, noise control, and panel widget. Built off of kAirPods
Transcript
WEBVTT
00:00:11.489 --> 00:00:16.029
Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris.
00:00:16.189 --> 00:00:16.789
My name is Wes.
00:00:17.009 --> 00:00:17.809
And my name is Brent.
00:00:18.429 --> 00:00:23.009
Hello, gentlemen. Well, coming up on episode 626 of your Unplugged program,
00:00:23.009 --> 00:00:27.569
we're digging into a Btrfs bug that is biting people in the wild right now.
00:00:27.729 --> 00:00:30.149
So we want to get the word out there as fast as possible.
00:00:30.529 --> 00:00:36.089
Then, over the week, I discovered how to actually get whole home audio that's working.
00:00:36.449 --> 00:00:40.489
And it's working like a charm. And there's a little extra twist in there,
00:00:40.549 --> 00:00:41.769
too. So I'll tell you about that.
00:00:41.849 --> 00:00:45.249
Then we'll round the show out with some great boosts, killer picks, and a lot more.
00:00:45.449 --> 00:00:50.909
So before we get into all of that, let's say time-appropriate greetings to our virtual lug. Hello, Mumble Room.
00:00:51.869 --> 00:00:55.049
Hey, Chris. Hey, Wes. Hello, Brent. Hello, gang.
00:00:55.289 --> 00:00:58.149
Hello, everybody in the Mumble Room. Thank you very much for joining us over
00:00:58.149 --> 00:01:02.029
there. And shout out to everybody who's joining us live and shout out to all
00:01:02.029 --> 00:01:03.269
the members that make it possible.
00:01:04.389 --> 00:01:09.649
And go check out Managed Nebula at defined.net slash unplugged.
00:01:10.429 --> 00:01:14.349
Decentralized VPN built on open source software that we love,
00:01:14.709 --> 00:01:17.769
the Nebula platform, which you've been playing around with, Wes.
00:01:17.949 --> 00:01:18.409
I have.
00:01:18.729 --> 00:01:21.529
Yeah, you're getting me really jealous. We'll talk about that more at some other
00:01:21.529 --> 00:01:26.609
point. But Nebula is optimized for speed, simplicity, and it has industry-leading security.
00:01:26.789 --> 00:01:32.289
And the way this manifests for day-to-day life, less battery usage on your mobile
00:01:32.289 --> 00:01:36.569
devices, less network traffic, and less load on your servers.
00:01:36.949 --> 00:01:42.289
It's really light, it's really light. And unlike traditional VPNs, Nebula's decentralized design
00:01:42.849 --> 00:01:45.789
means that you can build out your own network as you
00:01:45.789 --> 00:01:50.409
like and make it as resilient as you need. So it's great for a home lab, great for
00:01:50.409 --> 00:01:53.989
a global enterprise, whether you're using their managed product or the self-hosted
00:01:53.989 --> 00:01:58.269
option, which, chef's kiss, is top-notch. One of the reasons why we love it so much:
00:01:58.269 --> 00:02:02.109
they don't hold back on the self-hosted product. That's what they build on top
00:02:02.109 --> 00:02:03.829
of, and that's what's available for all of us.
00:02:04.309 --> 00:02:07.649
And they're starting to roll out desktop clients.
00:02:07.809 --> 00:02:12.269
It's going to get even easier to use Nebula, which I'm really excited about.
00:02:12.429 --> 00:02:15.349
Because anything that just opens us up to more users is going to be fantastic.
00:02:15.749 --> 00:02:19.469
And Nebula takes advantage of things like the noise protocol framework for key
00:02:19.469 --> 00:02:22.849
exchange and symmetric encryption. So you know they're using good stuff in there.
00:02:23.129 --> 00:02:26.189
And whether you want to self-host the entire infrastructure or you want to check
00:02:26.189 --> 00:02:28.909
out their managed product, which makes it really straightforward,
00:02:29.729 --> 00:02:33.949
you got an option. In fact, if you go to defined.net slash unplugged, you support the show.
00:02:34.529 --> 00:02:38.569
You can also get started with 100 hosts absolutely free. No credit card required.
00:02:38.809 --> 00:02:42.089
Go see why we're really excited about Nebula by playing around with it first
00:02:42.089 --> 00:02:44.469
at defined.net slash unplugged.
00:02:46.926 --> 00:02:50.586
We've got a little bit of exciting housekeeping. This is approaching really fast.
00:02:51.006 --> 00:02:54.966
Yes, I'll be giving a talk at Nix Vegas at DEF CON this year.
00:02:55.106 --> 00:02:58.366
So if you'll be going, you can find me there.
00:02:58.526 --> 00:03:02.586
Come check it out. We'll be talking about mesh sidecars for NixOS services.
00:03:03.086 --> 00:03:06.466
And then an attempt to find other people in the chaos, if anyone's interested,
00:03:06.706 --> 00:03:11.426
I think I'll try to post up at the Casbar Lounge on Saturday around 6:30 for
00:03:11.426 --> 00:03:13.866
an hour or so, just for a sort of office hours.
00:03:14.166 --> 00:03:18.666
Nice, simple meetup right there. No meetup page required. Just, where is it?
00:03:19.066 --> 00:03:22.426
The Casbar Lounge in the Sahara Hotel at what time? 6:30.
00:03:22.426 --> 00:03:24.626
On Saturday. So if you're in the area, or...
00:03:24.626 --> 00:03:26.026
You can, I don't know, ping me on Matrix.
00:03:26.026 --> 00:03:28.386
You don't even have to be at DEF CON. If you're just in the Vegas area, that's
00:03:28.386 --> 00:03:32.446
true too. Come say hi to Wes Payne. Absolutely. Brent and I can attest he's a
00:03:32.446 --> 00:03:35.126
delight to hang out with at a bar or at dinner,
00:03:35.826 --> 00:03:40.106
wherever it is. So do recommend you go check it out. And where do they go for
00:03:40.106 --> 00:03:44.906
details? Is it nix.vegas? Is that... do they have a website? That's right. Yeah, nix.vegas
00:03:44.906 --> 00:03:47.066
slash schedule. There's a dot Vegas?
00:03:47.126 --> 00:03:48.246
Of course there's a dot Vegas.
00:03:50.958 --> 00:03:54.558
So we wanted to talk about a file system corruption issue that is affecting
00:03:54.558 --> 00:04:01.978
Btrfs users on more recent kernels like 6.16 and 6.15.3.
00:04:02.478 --> 00:04:06.938
And we are now at a point with Btrfs adoption where there are enough leading-edge
00:04:06.938 --> 00:04:11.058
kernels and users out there and distributions that are actually deploying and
00:04:11.058 --> 00:04:14.018
testing these things kind of right as they ship.
00:04:14.238 --> 00:04:17.498
And thanks to them, we've discovered there's a bit of a problem going on that
00:04:17.498 --> 00:04:19.038
has left some systems unbootable.
00:04:19.038 --> 00:04:22.418
Yeah, it turns out there's a lot of root file systems out there now,
00:04:22.418 --> 00:04:27.258
too, which is a great thing. We know how much we like it, except if,
00:04:27.318 --> 00:04:30.138
you know, your root file system won't mount.
00:04:30.278 --> 00:04:30.918
That's not good.
00:04:31.198 --> 00:04:31.358
No.
00:04:31.738 --> 00:04:35.658
Yeah, so the bug itself didn't do any damage to the data, but it did prevent
00:04:35.658 --> 00:04:37.898
the system from properly mounting the root device.
00:04:38.078 --> 00:04:42.278
Yeah. Okay, so Btrfs is a copy-on-write file system. We're also aware, generally,
00:04:42.458 --> 00:04:44.738
right, like ext4 is a journaling file system.
00:04:44.738 --> 00:04:50.058
And these journals are something like what's known as a write-ahead log in the
00:04:50.058 --> 00:04:55.358
database world, or ZFS has the intent log, which is somewhat different but very similar also.
00:04:56.158 --> 00:05:02.478
The idea is, for data consistency and crash consistency, you can write the things
00:05:02.478 --> 00:05:06.338
that you're doing. Especially because with copy-on-write, let's say you're updating
00:05:06.338 --> 00:05:09.418
our show notes, a markdown file. Well, what happens?
00:05:09.418 --> 00:05:14.138
You make a new copy of that, so you can do snapshotting, so you have all the features we love,
00:05:14.538 --> 00:05:18.218
but there's also this tree structure, and you have to kind of go update all the
00:05:18.218 --> 00:05:21.178
tree to make sure that, all the way at the root, when you go to, you
00:05:21.178 --> 00:05:25.098
know, ls that directory, it actually points at that updated copy and not the old copy,
00:05:25.658 --> 00:05:29.078
but the snapshot version has it. So there's a bunch of bookkeeping and updates
00:05:29.078 --> 00:05:30.478
you need to make just for that write,
00:05:31.318 --> 00:05:34.798
and what happens if you crash in the middle? So the idea is you make a note that
00:05:34.798 --> 00:05:36.338
you're like, I'm going to do this update,
00:05:36.838 --> 00:05:42.238
and then you flush that to disk, and then you can move that out of the log. But
00:05:42.238 --> 00:05:46.358
if you crash in the middle, you can see from the log, oh, I hadn't finished that,
00:05:46.358 --> 00:05:48.078
and then you can kind of check to see, like, do I need...
00:05:49.175 --> 00:05:52.675
Can I fix it? Can I just replay that? Most of the time, if you have an unclean
00:05:52.675 --> 00:05:56.015
shutdown, it just automatically replays that journal from the log,
00:05:56.715 --> 00:06:00.515
brings your file system back to a clean state. You don't even really need to know about it.
00:06:00.655 --> 00:06:04.635
So a handful of users, like on CachyOS and Fedora, where they're getting pretty
00:06:04.635 --> 00:06:08.535
current kernels and they're using Btrfs on the root, they experienced some
00:06:08.535 --> 00:06:11.895
sort of crash, and then when they reset their system, they couldn't boot.
00:06:12.035 --> 00:06:15.395
Yep, you just get an error that it can't replay the log, which means it sees
00:06:15.395 --> 00:06:16.715
that there is data there.
00:06:16.835 --> 00:06:19.875
For some reason, your file system, whether it was like a total shutdown,
00:06:20.075 --> 00:06:24.275
forced shutdown, or maybe something just happened on the fly before as it was
00:06:24.275 --> 00:06:27.135
shutting down, you have stuff in that replay log.
00:06:27.455 --> 00:06:31.615
And so, of course, it doesn't want to just drop that because you could have,
00:06:31.795 --> 00:06:34.915
basically, the amount of data loss you could have if you didn't replay that
00:06:34.915 --> 00:06:38.155
is roughly constrained by how
00:06:38.155 --> 00:06:41.515
often you're doing these background commits sort of flushing to the disk.
00:06:41.635 --> 00:06:44.575
Usually it's like 30 seconds, maybe it could be a couple of minutes if you have
00:06:44.575 --> 00:06:47.975
some configuration or doing tons of fsyncs or lots of disk I/O or something.
00:06:49.215 --> 00:06:52.955
And so what the fix ends up being in terms of just like I want to get my stuff
00:06:52.955 --> 00:06:59.155
going again, is you run btrfs rescue, which is a whole subcommand for rescue commands,
00:06:59.875 --> 00:07:05.355
and then zero-log, and that pretty much does what it says: clear the tree log.
00:07:05.355 --> 00:07:08.355
You essentially do that from a live environment, because if you were to, say,
00:07:08.355 --> 00:07:11.955
boot into a live environment, you wouldn't be able to just mount that file system because of this
00:07:11.955 --> 00:07:14.895
problem. No, anytime you mount it, it's just going to complain with the same thing.
00:07:14.895 --> 00:07:17.815
I think there is also a mount option you can do to say like skip replay.
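For reference, here is a minimal sketch of that recovery path, run from a live environment; the device path is a placeholder for your actual Btrfs root partition.

```
# Run from a live USB/ISO, with the affected filesystem unmounted.
# Optional first step: a read-only mount that skips log replay, to confirm the
# data is intact or to copy anything important off (ro is required with this option).
mount -o ro,rescue=nologreplay /dev/sdX2 /mnt

# Then unmount and clear the log tree. Only the writes that were still sitting
# in the log are discarded; everything already committed stays.
umount /mnt
btrfs rescue zero-log /dev/sdX2

# Reboot into the installed system as normal.
```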
00:07:19.095 --> 00:07:24.755
And so now this is sort of splitting hairs, but this is technically not a Btrfs
00:07:24.755 --> 00:07:30.095
bug so much as it is something kind of related to a series of other patches
00:07:30.095 --> 00:07:32.395
that kind of led to this issue. Am I following this right?
00:07:32.595 --> 00:07:34.035
No, so it is Btrfs.
00:07:34.275 --> 00:07:34.415
Okay.
00:07:34.495 --> 00:07:36.715
It's just, it kind of has an interesting history.
00:07:37.035 --> 00:07:40.615
Yeah, okay. That's what I was trying to follow, and that's a piece I wasn't getting. Yeah.
00:07:40.835 --> 00:07:46.695
So it all goes back to 2018, actually. A commit called fix warning when replaying
00:07:46.695 --> 00:07:49.255
log after fsync of a temp file.
00:07:49.575 --> 00:07:52.235
And the thing with temp files is they're, they generally like they're meant
00:07:52.235 --> 00:07:57.135
to be discarded at reboot, right? And so they don't actually like have any extents on disks.
00:07:57.275 --> 00:08:00.255
They just sort of have like the inode for tracking and they can live in memory
00:08:00.255 --> 00:08:02.755
and the cache and all that, but they don't actually make it to the disk.
00:08:02.755 --> 00:08:04.935
Right. And so they were running into some warnings.
00:08:05.135 --> 00:08:09.895
So they went to go patch this up. And to do that, they made essentially two changes.
00:08:10.875 --> 00:08:13.995
One, because you basically have two sides of this. When you're going to do an
00:08:13.995 --> 00:08:17.615
operation, you write into the log to say, like, I'm about to do X,
00:08:17.715 --> 00:08:19.875
right? I'm about to delete this temp file.
00:08:20.706 --> 00:08:23.466
So before I take the action, I log that I'm going to take the action.
00:08:23.586 --> 00:08:27.666
Exactly. And then you have the state where you're booting up and you're mounting
00:08:27.666 --> 00:08:30.606
the file system and you're replaying that log if there's stuff in there that
00:08:30.606 --> 00:08:32.346
hasn't been synced with the disk yet.
00:08:33.386 --> 00:08:35.746
So in this case, what they did was they said, all right, well,
00:08:35.746 --> 00:08:38.526
we don't really care about these temp files in the log anyway,
00:08:38.526 --> 00:08:41.886
so we'll stop adding new stuff to the log going forward.
00:08:43.006 --> 00:08:47.166
And when we're in replay mode, we'll just skip these because they're going to
00:08:47.166 --> 00:08:50.366
end up getting deleted anyway. We don't care about them. The problem,
00:08:50.606 --> 00:08:52.686
which was not apparent at the time, because basically...
00:08:52.686 --> 00:08:53.246
Back in 2018.
00:08:53.486 --> 00:08:54.126
Back in 2018.
00:08:54.406 --> 00:08:55.086
When they're writing this patch.
00:08:55.226 --> 00:08:59.006
There's multiple different stages you can be in replay. And so there's one called
00:08:59.006 --> 00:09:02.826
replay inodes, and that's where they sort of did this skipping.
00:09:03.066 --> 00:09:05.126
But there's another one called replay all.
00:09:05.406 --> 00:09:09.986
And then it turns out that the way this change was made, it didn't apply during that stage.
00:09:11.206 --> 00:09:14.106
But because they were also making the change at the same time where they were
00:09:14.106 --> 00:09:17.226
just not going to add that stuff to the log anymore, the only way you could
00:09:17.226 --> 00:09:21.686
trigger it back then was if you were somehow, like you had upgraded your kernel
00:09:21.686 --> 00:09:25.506
and had an unclean shutdown, or you were mounting something from an unclean shutdown on an older kernel.
00:09:25.706 --> 00:09:29.806
Okay. So this is the bit. So then later on, there was like an additional patch
00:09:29.806 --> 00:09:30.946
that compounded this problem?
00:09:31.086 --> 00:09:36.406
Yeah. So just in May, it started and got picked up and added to 6.15.3 and 6.16.
00:09:36.726 --> 00:09:38.586
Uh-huh. So that's why it's in the most recent kernels.
00:09:38.766 --> 00:09:42.226
Yeah. So then we noticed a problem that it turns out by not,
00:09:43.206 --> 00:09:46.806
just by no longer putting this temp file stuff in the log at all,
00:09:46.806 --> 00:09:50.246
it meant that we could have a problem where we actually
00:09:50.246 --> 00:09:52.966
left them undeleted, like they were still kind of hanging
00:09:52.966 --> 00:09:56.026
around. And when you did have an unclean shutdown,
00:09:56.026 --> 00:09:58.826
the accounting kind of was broken the
00:09:58.826 --> 00:10:02.826
other way, where instead of, like, losing stuff, it would keep stuff you were trying
00:10:02.826 --> 00:10:07.346
to delete around. Okay, so not a huge issue, but it's like incorrect behavior according
00:10:07.346 --> 00:10:11.826
to how you'd expect the file... Kind of crufty over time. Yeah, exactly. If we
00:10:11.826 --> 00:10:16.746
fsync a file that has no more hard links, because while a process had a file descriptor open on it,
00:10:16.886 --> 00:10:17.966
the file's last hard link was
00:10:17.966 --> 00:10:20.866
removed, and then the process did an fsync against the file descriptor,
00:10:21.066 --> 00:10:25.466
after a power failure or crash, the file still exists after replaying the log.
00:10:25.586 --> 00:10:25.726
Okay.
00:10:26.046 --> 00:10:29.546
Right? So they'll fix this by not ignoring inodes with zero hard links.
00:10:29.906 --> 00:10:33.706
So now we're putting that stuff back into the log again.
00:10:33.906 --> 00:10:34.106
Okay.
00:10:34.426 --> 00:10:38.686
And that means suddenly this issue, which had actually technically sort of been
00:10:38.686 --> 00:10:41.046
there since 2018, can now be hit again.
00:10:41.907 --> 00:10:44.747
Fairly easily, it turns out. I mean, you still kind of need something to happen
00:10:44.747 --> 00:10:48.207
weird with the file system where you have stuff to replay. It has to be triggered
00:10:48.207 --> 00:10:49.087
by some kind of event.
00:10:49.087 --> 00:10:49.807
Yeah, it
00:10:49.807 --> 00:10:52.707
doesn't just happen if you've got Btrfs and it's running and you have 6.16.
00:10:52.707 --> 00:10:56.387
You basically need to be in a state where it finds stuff in that replay journal
00:10:56.387 --> 00:10:59.047
when it's booting up and mounting the file system. My
00:10:59.047 --> 00:11:03.467
system upstairs, by the way, is on Linux 6.16 and it is Btrfs on root, so that's
00:11:03.467 --> 00:11:06.767
why I was curious. Yeah, that's it, it has to be... I'd have to crash or something.
00:11:06.767 --> 00:11:09.607
Yeah, crash, or, yeah, maybe, I don't know. Some application dumps.
00:11:09.607 --> 00:11:13.807
Yeah, kernel has a problem and it forces the file system offline without doing it correctly.
00:11:13.907 --> 00:11:15.547
Out of memory kills something. I mean, you never know.
00:11:15.867 --> 00:11:19.407
So it gets, like, this gets thrown out there sometime in May in the Linux Btrfs
00:11:19.407 --> 00:11:25.027
lists and then eventually gets pulled in for 6.16. And then once that kind of
00:11:25.027 --> 00:11:28.847
gets pushed along, it also gets backported to 6.15.3.
00:11:29.067 --> 00:11:34.227
And you can see it's just this one commit from there that, like, as 6.15.3 started
00:11:34.227 --> 00:11:37.147
rolling out pretty much like by a week later or so,
00:11:37.147 --> 00:11:41.667
kind of the end of June, the 26th or 27th, CachyOS users were some of the first folks
00:11:41.667 --> 00:11:46.787
who really started running into these things. And then by, like, July 7th or so,
00:11:46.787 --> 00:11:51.407
a kernel mailing list thread gets going and people start trying to... it's a
00:11:51.407 --> 00:11:54.627
good collective effort. The CachyOS folks, Arch folks, they're all kind of like...
00:11:54.627 --> 00:11:56.147
And Btrfs developers.
00:11:56.147 --> 00:11:59.247
Yeah, they're reaching out to the Btrfs devs, who are kind of like, well, we
00:11:59.247 --> 00:12:03.887
really need dmesg output, or like more than just, you know, sort of the one error. They
00:12:03.887 --> 00:12:07.087
had to work out, like, a standard operating procedure to get these systems and
00:12:07.087 --> 00:12:08.927
these file systems back up so they could pull logs.
00:12:09.107 --> 00:12:12.127
Like, they had to come up with procedures and processes they could communicate to people.
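If you hit this and want to help with a report, here is a rough sketch of what gathering that data might look like from the live environment, before you clear the log; the device path and file names are placeholders.

```
# The mount attempt is expected to fail; that failure is what produces the
# tree-log replay errors the developers ask for.
mkdir -p /mnt/broken
mount /dev/sdX2 /mnt/broken

# Save the kernel messages from the failed mount before running zero-log,
# since clearing the log tree also clears away the evidence.
dmesg > dmesg-failed-mount.txt
grep -i btrfs dmesg-failed-mount.txt
```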
00:12:12.227 --> 00:12:14.947
And then kind of by proxy, right, where, like, the distro folks are talking
00:12:14.947 --> 00:12:17.507
with the kernel folks and getting what the kernel folks needs and then reaching
00:12:17.507 --> 00:12:20.287
out to the users they're doing end-user support for to, like,
00:12:20.447 --> 00:12:22.647
tell them how to get this data and shepherd it around.
00:12:23.267 --> 00:12:28.267
But eventually enough stuff throughout July kind of comes together that there's
00:12:28.267 --> 00:12:33.527
now a fix out there that correctly does this skipping and replay of these temp
00:12:33.527 --> 00:12:36.527
files in all the stages that are actually necessary.
00:12:37.007 --> 00:12:39.647
Okay, so I basically just need to keep an eye out for a kernel update.
00:12:39.927 --> 00:12:46.207
Yeah, but that got proposed to the Linux Btrfs list, so presumably that will get...
00:12:47.021 --> 00:12:51.861
pulled into 6.17 and then presumably backported as well, but that's all going to take time.
00:12:51.921 --> 00:12:55.601
And in the meantime, if this were to happen to you, you could go into a live
00:12:55.601 --> 00:12:58.741
environment, run that rescue command, and you'd be alright.
00:12:58.921 --> 00:13:01.801
Yeah. The only data loss you should be risking is, like we were saying,
00:13:02.021 --> 00:13:07.121
is just whatever had been sort of not flushed to disk yet as the unclean shutdown was happening.
00:13:07.601 --> 00:13:10.981
Do we have a sense of how many users have been affected up to now?
00:13:11.141 --> 00:13:15.081
And I would imagine because of the fixes we're not going to see too many more
00:13:15.081 --> 00:13:16.301
users being affected as well?
00:13:16.401 --> 00:13:18.241
My sense is it's less than a thousand, but I don't know.
00:13:18.241 --> 00:13:21.401
Yeah, I don't know, because it's kind of like, you have to have some event that
00:13:21.401 --> 00:13:28.101
triggers it and you have to be on 6.16 or 6.15.3 or newer. And
00:13:28.101 --> 00:13:30.001
with Btrfs on root. Yeah, yeah.
00:13:30.001 --> 00:13:34.281
I mean, it could affect a non-root file system, it just wouldn't break your boot. You
00:13:34.281 --> 00:13:37.301
know who I think it's happening to the most? Users that game, and then the
00:13:37.301 --> 00:13:38.501
game crashes their system.
00:13:38.501 --> 00:13:40.701
There's been multiple reports of that. Yeah, that's
00:13:40.701 --> 00:13:44.641
seemingly the most affected, because that seems to be what crashes Linux the most. I don't
00:13:44.641 --> 00:13:48.221
know. Definitely folks who've self-reported also having sort of known sketchy
00:13:48.221 --> 00:13:50.721
power supplies or those sorts of situations. Sure, that'll
00:13:50.721 --> 00:13:51.181
do it. Yeah.
00:13:51.181 --> 00:13:54.281
Uh, it is worth calling out here, uh, this has all been, uh,
00:13:54.281 --> 00:13:56.961
the initial issue as part of fixing a bunch
00:13:56.961 --> 00:14:00.161
of other stuff, this has all been the same person's work across the
00:14:00.161 --> 00:14:03.281
years. Uh, Filipe Manana from SUSE
00:14:03.281 --> 00:14:05.981
has been, oh, responsible for fixing up all
00:14:05.981 --> 00:14:08.721
kinds of Btrfs issues and working on the file system, and it
00:14:08.721 --> 00:14:11.721
just works out that, like, he was the person who made the
00:14:11.721 --> 00:14:14.561
change in 2018 and the one that
00:14:14.561 --> 00:14:17.381
made it more apparent now and is the person who figured all
00:14:17.381 --> 00:14:20.301
of that out and made an excellent explanation and commit
00:14:20.301 --> 00:14:23.161
in the fix to patch it all up. So props there for
00:14:23.161 --> 00:14:27.901
sure. And it's
00:14:27.901 --> 00:14:31.301
exactly these kinds of things that make file systems, you know, so hard
00:14:31.301 --> 00:14:34.001
to debug. I mean, he's been
00:14:34.001 --> 00:14:38.061