BACKAGAIN
V3.1 now active
well, getting to this point has been a pain in the ass. at the time of writing (notably not the time of posting) I have spent 4 days getting myugii back up to a more or less stable running condition. gefs was no dice. after messing with it quite a lot I couldn't get it to boot with either mbr or gpt boot records (yes, I cleared the disk with dd from /dev/zero between attempts, because otherwise the installer kept trying to skip steps). I must have tried different option combinations something like 12 times, with full disk clears between each one, plus mucking about in the bios, trying 9fat, efi, etc. eventually I gave up and went with what I already knew well enough and what had definitely worked in the past: cwfs. that worked immediately. well, at least it got me on the ground floor. needing to remove none and twiddle auth back and forth is a minor inconvenience compared to "no mbr" and bios loops.

after that started the annoying task of bringing back all my backed up stuff. apparently there are tools to "migrate properly", including time manipulation shit to restore old history. I couldn't be bothered to learn about that; if you are interested, the tool is apparently called delorean.

anyway, a 9front install isn't really running until you can remote in and have your wallpaper set up. I never bothered with this before but it was always a thing I wanted to do, so I decided to set it up this time: having the wallpaper change every time I log in. I created several background images (4000x6000px) with several contiguous elements (myugii fannan motifs), converted them to plan9 image format, then set my profile to bind a random one of them over the wallpaper before riostart, so when it launches it will be a random one of the 16 each time. sub-rios will get the same wallpaper and there isn't a way to change it while it's running, but just having it be different each time I log in is nice enough. the absurd resolution of the images is for 2 reasons.
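(the profile bit is simple enough. here's a sketch of the idea; the bg directory, the wallpaper path, and using the clock for "random" are all my guesses, not anything rio dictates:)

```rc
# in $home/lib/profile, before riostart. the directory and the
# target path are made-up names; point them at wherever your
# converted plan9 images and the wallpaper file actually live.
bgs=(`{ls $home/lib/bg/*.img})
n=`{echo `{date -n} % $#bgs + 1 | hoc}	# 1-indexed pick off the clock
bind $bgs($n) $home/lib/wallpaper.img
```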
My monitor has grown since last time I made background images and now repeats on it, so I wanted to not have that happen, also I will often run in profile orientation, so I wanted to support that well. 4000 wide is plenty for futureproofing for now. The other reason is not plan9 related at all. since getting a good printer I have come to realize that so called "high quality" images floating around online are woefully tiny. a 1920x1080 image printed at 1200ppi is less than 2 inches across. that's... terrible. of course I can blow them up but then when printed at 13x19 inches it looks fucking terrible. the minimum I will print is 3000 wide. printing smaller images is just a waste of paper. Not that I am planning on printing any of these, I simply think that since I have this new found hatred of finding images I like except they are tiny, I'd rather not make more undersized images. #absurdhighresgang anyway. that was more or less trivial. what wasn't trivial is for some reason (unknown to me at the time) i was no longer able to log in. clienttls hung up. so I switched back to the main machine's interface and started debugging. the messages were saying that /net/tls wasn't present. looking it appeared that it was indeed not present. mounted #a to /net and there it is, as it should be. try to log in. no dice. ok. weird. so then I go to my /cfg/$hostname/cpurc and add the bind there. reboot, and still... not working. it exists after that but for some reason tlssrv isn't able to find it. Auth is working correctly, key exchange is working correctly but tlssrv isn't able to start the connection because supposedly /net/tls doesn't exist. this was driving me crazy. I gave up. came back. banged my head. gave up. looked online for info. read man pages. gave up. took a nap. looked online again. read source files. etc. 
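(for context, the bind I kept trying, first by hand and then from /cfg/$hostname/cpurc, was just attaching the tls device. something like:)

```rc
# attach devtls (#a) into /net; #a serves the tls directory
bind -a '#a' /net
ls /net/tls	# it shows up... and yet tlssrv insists it isn't there
```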
eventually I typed the magically correct sequence of letters into google for it to show me an old 9fans email chain where someone was having the same issue, and trying the same stuff I was trying to fix it. and there was an answer. apparently the namespace built in cpurc is not inherited by other processes; instead, the /lib/namespace file is used to create their namespaces. this makes some amount of sense, as otherwise it could be a permission leak depending on the setup. but I was able to log in before. only then did I remember that when I was copying my old data, I copied /lib from my backup on top of the current /lib, because that's where the slov and muffet dbs live (among other things I added). apparently I clobbered the working namespace file with a broken one. which is kinda sus to me, as I was able to log in fine before the reinstall (it is completely possible that the previous install was so old it only used tcp connections, not tls, and that's why it didn't matter). anyway, I added the bind there and now I'm able to log in again.

fonts are another pain point. it's not a huge deal, but I really wanted to be able to mix the typewriter font I use with another monospace font with more unicode coverage. ttfs should be able to do this, but apparently the font heights are completely incompatible, so it makes the spacing between lines absurd. I resigned myself to using kurinto.mono. it's fine. it has decent coverage, including kanji, it's just so "prim" it kinda fucks with my vibe. maybe I'll get used to it. it's fine.

anyway. muffet and slov are working, but I need to set up temuorin now before I can reconnect things. I haven't fully decided how I am going to set things up with it, because ideally myugii and temuorin would be in the same authdomain, but to do that I'd need to hardcode temuorin to connect to my home IP, which can change. it would make a lot of things easy if I did it that way tho.
temuorin could easily import files, or myugii could import temuorin's /net and run all the ip services here, and login would be seamless. the alternative is setting up secstore with keys to authenticate with temuorin before exporting things, or forgoing the seamless updating I had before and manually pushing each time I update. to be honest, the reason I wanted that seamlessness previously hasn't panned out. the idea was that I could use my local storage to host files of any size and not have to worry about the 20g limit on temuorin, but the network connection is so bad bouncing across the world that in practice it just slows everything down. it'd probably be best to just go with a push-on-update model. I already have the scripts to do it, because gemini was always using a push system, since I wanted it more bulletproof. during this maintenance, for instance, the http site is out of date by like... a year, but the gemini pages are fully up to date (to check that I was indeed not spitting bullshit I just checked, and yup, it's there with the "going down for maintenance" post, which is not on http; the most recent post there was march 2023... yikes). ya, a push system is probably best. then I don't need to do much, because temuorin and myugii don't need a persistent connection to each other; I don't even need to store keys, because I can provide them on each update. but fuck if updates aren't going to take forever... well... I guess that means I have a good reason to finally add the thing I have been thinking about for some time: the ability for muffet to check for changes and only update the files that have actually changed. there has been no easy way to do this up to now. muffet creates all the files from slov every time, so every single run all the modification dates get set to the present. so checking modification times is out. and doing a diff across the world is slower than just pushing all the files. so gemini did the pragmatic thing: it pushed all the gmi files every time.
it did check whether each media file existed at all and only pushed the ones that didn't, so updates didn't take an hour and 5 gigs every time. but still, sending close to 200 files separately every update takes forever. another thing I might want to do is tar.gz the changed files, push that, then remote execute a 'tar xz'; that way I'm not opening and closing so many connections. I guess we'll first need to see whether the dirty/clean updating alone cuts down the time enough for that not to matter. well, I guess I have more shit to do. I was really hoping to have this migration done by now. T-T

____ TIME LAPSE ____

well, the muffet update did help, but I did end up needing to do the tar-ing dance to send the files over. all of this helped. but. and this is a big but. it was still extremely slow. just updating all the html files every time would take something like 20 minutes; if I checked for changes, that dropped to something like 15. painful. the issue is that using test to check modification times is quite slow over the network. so ya, see the lsdif page for more about the fix for that. anyway, this has all been sorted. now a normal update happens in less than a minute, and much of that time is just starting the rimport and rcpu calls, which require auth. the actual checking of what needs to be sent, and the sending, is quite fast now.

it should all be done now. I have tested small updates of a few pages, with only a few additions and text modifications, and it is much faster, taking on the order of a minute. this page will be going up with the first "big" update since getting all this to work. this change touches every generated html file, adds several new pages, includes media additions for both the html and gemini versions, and updates the css. if this goes as planned (does anything ever?) then I feel fine calling this new system done. bleh.

--post mortem--

ya, that took a while.
but to be fair, I did send over close to a gig in updated audio files and background images, plus every single html page. there were a few things I ended up needing to fix, so there have been a few very minor changes here and there since this went live initially, but nothing to the infrastructure. so the new update system gets a gold star.
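(for the curious, the tar-ing dance is nothing fancy. assuming the update script leaves a list of changed paths in /tmp/changed and the site lives in /usr/web on temuorin, both names made up here, it's roughly:)

```rc
# pack only the dirty files, ship them over one connection,
# and unpack on the far side instead of pushing file by file
tar cz `{cat /tmp/changed} | rcpu -h temuorin -c 'cd /usr/web; tar xz'
```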
tags:
architectural lsdif muffet myugii_fannan orange safe temuorin