git cp orig copy

Problem: I want to copy a file in git and preserve its history.

Solution: Holy guacamole!

git checkout -b dup
git mv orig copy
git commit --author="Greg " -m "cp orig copy"
git checkout HEAD~ orig
git commit --author="Greg " -m "restore orig"
git checkout -
git merge --no-ff dup

RHEL 9 on libvirt and KVM

Problem: you create a VM like you always did, but RHEL 9 bombs with:

Fatal glibc error: CPU does not support x86-64-v2

Solution: as Dan Berrange explains in bug #2060839, the traditional default CPU model, qemu64, is no longer sufficient. Unfortunately, there's no "qemu64-v2". Instead, you must select one of the real CPU models.

<cpu mode='host-model' match='exact' check='none'>
  <model fallback='forbid'>Broadwell-v4</model>
</cpu>

Suvorov vs Zubrin

Bro, u mad?

Zubrin lies like he breathes, not even bothering to calculate in Excel, let alone take integrals!

But they continue to believe him — because it's Zubrin!

When they discussed his "engine on salt of Uranium" [NSWR — zaitcev], I wrote that the engine cannot work in principle: a reactor of this kind can only work with effective moderation of neutrons, but already at a moderator temperature of 3000 degrees (as in KIWI), the fission cross-section drops 10 times and the critical mass grows proportionally. But nobody paid attention — who am I, and who is Zubrin!

The core has to be hot, and the moderator has to be cool, this is essential.

But they continued to fantasize: is it going to be 100,000 degrees in there, or only 10,000?

It did not matter how much I pointed out the fundamental contradiction: here is the cold sub-critical solution, and there is the super-critical plasma only a meter or two away, so neutrons from that plasma fly into the solution — which inevitably captures them, slows them down, and reacts — and the whole concept goes down the toilet.

But they have discussed this salt engine for decades, without ever trying to check Zubrin's claims.

All "normal" nuclear reactors work only in the region between the "first" criticality (including the delayed neutrons) and the "second" one (on fast, prompt neutrons alone). Only in this region is control of the reactor possible. And the margin is tiny: k between 1.000 and 1.007 for slow neutrons, and only up to 1.002 for fast ones (for Plutonium, it is this narrow even with slow neutrons).
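Those margins follow directly from the delayed-neutron fractions. A quick sanity check (the beta values below are standard textbook numbers, my own addition, not from the original discussion):

```python
# A reactor is controllable between delayed criticality (k = 1) and
# prompt criticality (k = 1 + beta), where beta is the fraction of
# fission neutrons that are delayed.
beta = {
    "U-235, slow neutrons": 0.0065,   # ~0.65% of fission neutrons delayed
    "Pu-239, slow neutrons": 0.0021,  # ~0.21% for Plutonium
}

for fuel, b in beta.items():
    print(f"{fuel}: controllable range k = 1.0000 .. {1.0 + b:.4f}")
```

Which reproduces the 1.007 and 1.002 figures quoted above, to within rounding.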

And by the way, average delay for delayed neutrons is 0.1 seconds! The solution has to remain in the active zone for 100 milliseconds, in order to capture the delayed neutrons! Not even the solid phase RD-0410 reached that much.

Therefore, Zubrin's engine must be critical on prompt neutrons. And because the hot moderator underperforms, the prompt neutrons remain practically as fast as fission neutrons, and therefore the density of the plasma has to be comparable to the density of metal in order to achieve criticality — that is to say, almost 20 g/cm^3 for Uranium.

But this persuades nobody, because Zubrin is Zubrin, and who are you?

It all began as a discussion of Mars Direct among geeks, but escalated quickly.

Python subprocess and stderr

Suppose you want to create a pipeline with the subprocess module and you want to capture the stderr. A colleague of mine upstream wrote this:


    p1 = subprocess.Popen(cmd1,
      stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    p2 = subprocess.Popen(cmd2, stdin=p1.stdout,
      stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    p1.stdout.close()
    p1_stderr = p1.communicate()
    p2_stderr = p2.communicate()
    return p1.returncode or p2.returncode, p1_stderr, p2_stderr

Unfortunately, the above collects the piped output in memory via communicate(), which may come back to bite the user once the amount piped becomes large enough.

I think the right answer is this:


    with tempfile.TemporaryFile() as errfile:
        p1 = subprocess.Popen(cmd1,
          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
        p2 = subprocess.Popen(cmd2, stdin=p1.stdout,
          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
        p1.stdout.close()
        p2.communicate()
        p1.wait()
        errfile.seek(0)
        px_stderr = errfile.read()
    return p1.returncode or p2.returncode, px_stderr
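For illustration, here is that approach wrapped in a function and run over a trivial pipeline (the wrapper name and the seq/grep demo are mine, not from the original code; this variant also returns the piped stdout):

```python
import subprocess
import tempfile

def run_pipeline(cmd1, cmd2):
    """Run cmd1 | cmd2; spill both commands' stderr into a temporary
    file instead of buffering it in memory."""
    with tempfile.TemporaryFile() as errfile:
        p1 = subprocess.Popen(cmd1,
          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
        p2 = subprocess.Popen(cmd2, stdin=p1.stdout,
          stdout=subprocess.PIPE, stderr=errfile, close_fds=True)
        p1.stdout.close()      # so p1 gets SIGPIPE if p2 exits early
        out, _ = p2.communicate()
        p1.wait()
        errfile.seek(0)
        px_stderr = errfile.read()
    return p1.returncode or p2.returncode, out, px_stderr

# seq 1 5 | grep 3  -> stdout b"3\n", no stderr, both exit 0
rc, out, err = run_pipeline(["seq", "1", "5"], ["grep", "3"])
print(rc, out, err)
```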

Stackoverflow is overflowing with noise on this topic. Just ignore it.

Cura on Fedora is dead, use Slic3r

Was enjoying my Prusa i3S for a few months, but had to use my Lulzbot Mini today, and it was something else.

In the past, I used the cura-lulzbot package. It went through difficult times, with a Russian take-over and Qtfication. But I persisted in suffering, because, well, it was turnkey and I was a complete novice.

So, I went to install Cura on Fedora 35, and found that package cura-lulzbot is gone. Probably failed to build, and with spot no longer at Red Hat, nobody was motivated enough to keep it going.

The "cura" package is the Ultimaker Cura. It's an absolute dumpster fire of pretend open source. Tons of plug-ins, but basic materials are missing. I print in BASF Ultrafuse ABS, but the nearest available material is the PC/ABS mix.

The material problem is fixable with configuration, but a more serious problem is that the UI is absolutely bonkers, with crazy flashing, and it does not work. There are menus that cannot be reached: as I move the cursor into a submenu, it disappears. Something is seriously broken in Qt on F35.

BTW, OpenSCAD suffers from incorrect refresh on F35, too. It's super annoying, but at least it works, mostly.

Fortunately, "dnf remove cura" also removes 743 trash packages that it pulls in.

Then, I installed Slic3r, and that turned out to be pretty dope. It's a well put-together package, and nowadays it has a graphical UI whose operation is mostly bug-free and makes sense.

However, my first print popped off the bed. As it turned out, the Lulzbot requires an initial start sequence that auto-levels it, and I missed that. I could have extracted it from my old dot files, but in the end I downloaded a settings package from the Lulzbot website.

PyPI is not trustworthy

I was dealing with a codebase S at work that uses a certain Python package N (I'll name it in the end, because its identity is so odious that it will distract from the topic at hand). Anyhow, S failed tests because N didn't work on my Fedora 35. That happened because S installed N with pip(1), which pulls from PyPI, and the archive at PyPI contained broken code.

The code for N in its source repository was fine, only PyPI was bad.

When I tried to find out what happened, it turned out that there is no audit trail for the code in PyPI. In addition, it is not possible to contact the listed maintainers of N through PyPI, and there is no way to report the problem: PyPI's issue tracker is plastered with warnings not to use it for problems with packages, only for problems with the PyPI software itself.

By fuzzy-matching provided personal details with git logs, I was able to contact the maintainers. To my great surprise, two out of three even responded, but they disclaimed any knowledge of what went on.

So, an unknown entity was able to insert code into a package at PyPI, and pip(1) was downloading it for years. This only came to light because the inserted code failed on my Fedora test box.

At this point I can only conclude that PyPI is not trustworthy.

Oh, yeah. The package N is actually nose. I am aware that it was dead and unmaintained, and nobody should be using it anymore, least of all S. I'm working on it.

Adventures in tech support

OVH was pestering me about migrating my VPS from its previous range to the new (and more expensive) one. I finally agreed. Migrated the VM to the new host, and it came up with no networking. Not entirely unexpected, but it gets better.

The root cause is the DHCP server at OVH returning a lease with a /32 netmask. In that situation, it's not possible to add a default route, because the next hop falls outside the netmask.

Seems like a simple enough problem, so I filed a ticket with OVH support, basically saying "your DHCP server supplies an incorrect netmask, please fix it." Their ultimate answer was, and I quote:

Please note that the VPS IP and our failover IPs are a static IPs, configuring the DHCP server may cause a network issues.

I suppose I knew what I was buying when I saw the price for these OVH VMs. It was too cheap to be true. Still, disappointing.

Scalability of a varying degree

Seen at the official site of Qumulo:

Scale

Platforms must be able to serve petabytes of data, billions of files, millions of operations, and thousands of users.

Thousands of users...? Isn't that a little low? Typical Swift clusters at telcos have tens of millions of users, of which tens or hundreds of thousands are active simultaneously.

Google's Chubby paper has a little section on the scalability problem of talking to a cluster over TCP/IP. Basically, at the low tens of thousands of connections you start to have serious issues with kernel sockets and TIME_WAIT. So maybe that's it.