DEVS: "How stuff could work" - your feedback required

hammy

Member
Jun 1, 2019
UK
Dear fellow Devs,

There are some things it's becoming clear we should discuss - at minimum to get an idea of where we want to go, and then hopefully to see how to get there from here.

It would be nice if people shared their visions of how they see these things.

Try not to get bogged down with "everything should live under /opt/somepath" or "everyone must install tools from a .tardist" - that will be tricky, but we'll see.

The goal is to get an idea for how people "see" scenarios playing out.

Questions I believe it's worth us getting a feeling for now (feel free to suggest things to add to or delete from this list, that'd be nice):
  1. How do we see the experience for "fresh to IRIX or DEV"
  2. How do we see the experience for "mature DEV working on tooling"
  3. How do we introduce enough guard rails so that DEV A helping DEV B can reproduce problems with some confidence in (1) and (2)
  4. Do we imagine it's possible to "multi install" different versions (both tools + ports)?
  5. Do we support multi-arch (MIPS3/MIPS4) DEV/INSTALL
  6. Do we support mixed compiler trees (MIPSpro/GCC4/GCC8) DEV/INSTALL
  7. Do we support N32 and 64 bit? Shall we properly respect "lib32" and "lib64"
  8. Where is everything versioned - e.g. The ports and their dependencies
  9. Where is the base tooling versioned - e.g. Is this some custom ports tree with packaging included?
  10. How do we make these base tools reproducible and releasable by anyone?
  11. What checks are we going to put into place to ensure the quality of the things in (8,9,10)
  12. (Contentious, but needs a mention) - what kind of decision structure should we put in place in case of disputes
I realise this might sound a little too much like "that's becoming a job" - but I think it's a good exercise to reflect a little on the scope of what we are attempting to work towards.

That's already some things to ponder - we'll see what falls out.

Kr,

Dan
 

onre

Administrator
Feb 8, 2019
1. Something like this: have a tarball, unpack that somewhere, set a path and start trying to compile stuff. Or, install a tardist which may do some of that for you.

2. Minimal hand-holding, possibility to decide prefixes for all parts of the toolchain to allow for quick testing of things - for example, changing from one binutils to another with a certain compiler.

3. The idea of having a script to start a shell with decent default environment is a solid one. This combined with known versions of compiler etc would be nice. Possibly at some point we could inject sgug and a number in the program version strings to make it easier to keep track of what's used by people?
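A rough sketch of what that could look like - every name here (sgugdevshell, SGUG_PREFIX, the version string) is a placeholder invented for illustration, not an agreed design:

```shell
# Hypothetical sgugdevshell: pin a known toolchain prefix and stamp the
# environment so "env | grep SGUG" tells a helper exactly what's in use.
SGUG_PREFIX="${SGUG_PREFIX:-/usr/sgug}"   # assumed install location
SGUG_ENV_VERSION="sgug-tools-0.1"         # injected release ID (placeholder)
SGUG_ENV_ARCH="mips4"                     # example pinned target arch
SGUG_ENV_COMPILER="gcc-8"                 # example pinned compiler

PATH="$SGUG_PREFIX/bin:$PATH"
export PATH SGUG_PREFIX SGUG_ENV_VERSION SGUG_ENV_ARCH SGUG_ENV_COMPILER

# One line a user can paste into a bug report.
echo "SGUG env: $SGUG_ENV_VERSION arch=$SGUG_ENV_ARCH cc=$SGUG_ENV_COMPILER"

# A real wrapper would finish by honouring the user's shell choice:
#   exec "${SHELL:-/bin/sh}"
```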

4.-7. Why not? One part of the problem is the definition of "support".

4. One way of doing this would be the "virtual tree" I mentioned. Every built thing would end up in its own prefix, and a script would create a "virtual tree" of dirs and symlinks that would be a combination of desired versions. There are some apparent problems here though. For example, one could link against some library in the virtual tree getting whatever happens to be in there, or against a certain prefixed install to get that exact version. It's just an idea based on something I saw at work many years ago and which worked rather well.
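For illustration, the symlink side of that could be as simple as the sketch below - all paths are throwaway demo values, and the demo "zlib" prefixes stand in for real per-version installs:

```shell
# Each built thing lives in its own prefix under PKGROOT; the loop below
# assembles a "virtual tree" of symlinks for one chosen version of each.
PKGROOT="${PKGROOT:-/tmp/sgug-pkgs}"
VTREE="${VTREE:-/tmp/sgug-vtree}"

# Demo data: two installed versions of the same package.
mkdir -p "$PKGROOT/zlib-1.2.11/lib" "$PKGROOT/zlib-1.2.12/lib"
touch "$PKGROOT/zlib-1.2.11/lib/libz.so" "$PKGROOT/zlib-1.2.12/lib/libz.so"

rm -rf "$VTREE"
for pkg in zlib-1.2.12; do          # the desired "name-version" selections
    for dir in bin lib include; do
        [ -d "$PKGROOT/$pkg/$dir" ] || continue
        mkdir -p "$VTREE/$dir"
        for f in "$PKGROOT/$pkg/$dir"/*; do
            ln -s "$f" "$VTREE/$dir/$(basename "$f")"
        done
    done
done

ls -l "$VTREE/lib"    # libz.so now points into the zlib-1.2.12 prefix
```

Linking against $VTREE/lib picks up whatever is currently selected, while linking against a specific prefix pins the exact version - which is exactly the ambiguity described above.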

7. After all it's just one extra flag to configure to get this.
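For illustration, assuming autoconf-style packages and GCC's standard MIPS ABI flags - the SGUG_ABI variable and the /usr/sgug prefix are placeholders, not agreed names:

```shell
# Derive the libdir from the target ABI so N32 and 64-bit builds never mix.
# -mabi=n32 / -mabi=64 are the standard GCC MIPS ABI selection flags.
abi="${SGUG_ABI:-n32}"
case "$abi" in
    n32) libdir=lib32 ;;
    64)  libdir=lib64 ;;
    *)   echo "unsupported ABI: $abi" >&2; exit 1 ;;
esac

# The one extra configure flag:
echo "./configure --prefix=/usr/sgug --libdir=/usr/sgug/$libdir CFLAGS=-mabi=$abi"
```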

8.-12. No idea yet.

I'll write more as I think more about this.
 

onre

Administrator
Feb 8, 2019
Also, building and packaging are two separate things and this should be kept in mind IMO. Whatever is required to package something is entirely different from getting it to build. Optimally whatever we'd use would allow DEV A to port something to IRIX and DEV B to package it so that changes are moved over easily and the build is repeatable on DEV B's box.
 

hammy

Member
Jun 1, 2019
UK
Thanks for joining in Onre - sorry for being slow to reply, nursing a sick cat these last few weeks.

My taste on these questions = only taste :)

Commands, paths, even the order I show below are just guesstimates.

1) For rookie to port contributor (to SGUG/IRIX), I'd like to see
  • You install a tooling bundle (has IRIX DEV/header packages as hard dependency - i.e. you can't install it without them)
  • If you want to "free dev" - run sgugdevshell for locked base tools and knock yourself out (should respect user choice of $SHELL)
  • If ports are your thing, you clone somerepo using sgugrepoinit /where/my/repo/is https://path/to/repo
  • You choose your ARCH/COMPILER/INSTALL_DIR by doing vi /where/my/repo/is/something
  • If you want to use stable ports from the ports tree and your choice of ARCH/COMPILER/INSTALL_DIR, sgugdevshell /where/my/repo/is, cd a package and ./buildit
  • If you want to contribute a port, fork the ports repo, clone that and do someinstructions go here
2) For mature DEV on tooling, I'd like to see
  • As per rookie steps
  • You fork and then gitclone some tooling repo
  • You get your ARCH/COMPILER/INSTALL_DIR fixed to the tooling base values
  • There is the possibility to install multiple "tooling bundle" versions in parallel - necessary to better debug problems with tooling
  • Once a bug is identified and fixed, a push and validation by other SGUG members happens
  • Everyone is good with some fix, new tooling release ID assigned
  • A scripted build of the new tooling takes place with the release ID
  • Testing confirmed, ports repo updates to require new tooling release ID
3) Guard rails
  • Shouldn't be possible to run ports builds without known good tooling version
  • Shouldn't be possible to install known good tooling versions without IRIX DEV packages
  • sgugdevshell
  • env |grep SGUG_ENV (as example) should give a nice clear summary of environment, versions, ARCH etc
4) Multi-install
  • I'd prefer not to have to uninstall/reinstall when comparing between versions of tooling
  • Same for ports, if I'm testing/bug fixing it's important that two versions can be in place and testable
5) Multi-arch
  • Would be nice to be able to use MIPS4 tools for people only interested in that platform - possible for ports, though maybe not for platform tools
6) Mixed compiler trees
  • MIPSpro and gcc are not compatible even when using the IRIX as/ld (C++, plus later ELF sections aren't compatible, world of pain there)
  • Personally I feel it's a mistake to have to support people playing this game
  • I'd force a restriction on setting ARCH/COMPILER across all ports
7) N32 and N64
  • Would be great, as per (1), if we let people choose. For those with >4GB RAM, you can't exploit that properly in a single process unless we allow 64-bit
  • For me, this means we must respect /lib32 and /lib64
  • For me, O32 is gone
8 + 9) Versioning
  • Base tooling and the builder/scripts used to release it must be versioned
  • Ports too, of course
  • In didbs it builds everything - so the base tools it relies on are a "known good version"
  • We can get a stability bonus from the base tooling being built from things bootstrapped from the ports tree, i.e. not relying on host tools to build the tooling
10) Tooling release
  • Already touched on in (2) above
  • Important to me that we don't have a "gatekeeper" - this should be scriptable and repeatable, anyone given the right 8 CPU ONYX monster can make a release
11) SGUG Q/A
  • Anyone building something needs to be sure it's a reproducible build
  • For both ports and tooling it would be nice if we had some idea of an "alpha" package, not yet primetime - would be nice to have this in parallel with (any) current tested version (upgrades). Maybe having a multi-ports tree solves this one.
  • I'd like to see some kind of assurance that people have actually tested a package (e.g. run make check or equivalent). It's normal that for some packages we accept test #X etc failed, but please can we document that and why we think it's ok.
12) Dispute decision structure
  • Fair to say I'm not touching this with a barge pole yet :)
  • I also have no idea hehe
Kr

Dan
 

onre

Administrator
Feb 8, 2019
I think I agree on everything here, really.

6) Sounds like the kind of scenario where my favourite warranty clause would be useful: "If it breaks, you get to keep both pieces."

7) sounds good. For me, o32 may not actually be completely dead, it just smells funny.

10) definitely.

11) Some kind of WIP concept would sure be nice. One way of ensuring make check results would be to simply compare the output and see if at least as many tests passed as did on the developer's box.
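One cheap way to do that comparison without diffing whole logs is to count passes. A sketch, assuming the common "PASS:"/"FAIL:" output style - the demo logs below stand in for real make check output:

```shell
# Compare pass counts against a reference log from the developer's box,
# instead of diffing megabytes of raw "make check" output.
cat > /tmp/reference-check.log <<'EOF'
PASS: test-open
PASS: test-read
FAIL: test-locale
EOF
cat > /tmp/local-check.log <<'EOF'
PASS: test-open
PASS: test-read
PASS: test-locale
EOF

ref=$(grep -c '^PASS' /tmp/reference-check.log)
loc=$(grep -c '^PASS' /tmp/local-check.log)
if [ "$loc" -ge "$ref" ]; then
    echo "OK: $loc passes locally, $ref on the reference box"
else
    echo "REGRESSION: $loc passes locally, $ref on the reference box" >&2
fi
```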

12) Fistfights at Toijala market square.

I'm also interested in ideas on a more concrete level. I've already caused some mess by giving people access to unfinished stuff and would like to keep that to a minimum.
 

Unxmaal

Administrator
Feb 8, 2019
Additions or commentary:
1. It's extremely tough for people with little to no experience to "jump in" to this stuff.

We should set that expectation up front: there will be no magical way to just "get" C development, porting, etc. It takes time and patience.

An option could be to add a YouTube video series where some of our more experienced devs could record themselves porting software.

12. As per our Discord moderation method, we have a core team of "deciders" who vote on disputes. A tie shall be broken by a feat of strength or alcoholic resistance at Toijala market square.


*. As an overall guideline, we should aim for broad, working functionality over features. For example, nobody cares if we support MIPSpro and gcc 9 if our porting environment is impossible to set up, or our packages are broken.
 

Raion

New member
Jun 21, 2019
Virginia
  • MIPSpro and gcc are not compatible even when using the IRIX as/ld (C++, plus later ELF sections aren't compatible, world of pain there)
Yeah, there are a few issues I've found even if you stick to C code when mixing the two. See trying to compile libarchive using a GCC-compiled XZ using binutils. For C++, I'd agree GCC is probably the better, more sustainable option.

One issue I don't think was touched on. How do you propose source distribution?

One idea would be:

Using a Git or Mercurial repo, have a -CURRENT branch and a -STABLE branch. Every quarter, the STABLE branch is pruned and offered as a tarball. CURRENT would be for all real-time commits. You could easily add more branches, but I think with the numbers we have working here it's unwise to try monthly prunes - that would ultimately cause issues with bug-testing and regressions. What is supported for end-user usage should be as stable as possible, with less regard for the latest and greatest things.
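Cutting the quarterly tarball could be done with plain git tooling. A sketch, using a throwaway demo repo in place of the real tree - names like STABLE and ports-2019Q3 are examples only:

```shell
# Demo repo standing in for the ports tree.
rm -rf /tmp/sgug-ports
git init -q /tmp/sgug-ports
cd /tmp/sgug-ports
git config user.email dev@example.org
git config user.name dev
echo 'port files' > README
git add README
git commit -qm 'initial port'
git branch STABLE

# Tag the quarterly snapshot and pack it without any .git history.
git tag ports-2019Q3 STABLE
git archive --format=tar.gz --prefix=sgug-ports-2019Q3/ \
    -o /tmp/sgug-ports-2019Q3.tar.gz ports-2019Q3
tar tzf /tmp/sgug-ports-2019Q3.tar.gz
```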
  • Personally I feel it's a mistake to have to support people playing this game
So you think it's okay to just bin MIPSpro for C?
 

onre

Administrator
Feb 8, 2019
I'd say people can just define CC and CXX (or their equivalents in the proposed build system) to point to MIPSpro if they want and see whether it works. It wouldn't be "supported", though, whatever that means - again, it boils down to the definition of "supported" in case of a project where a handful of enthusiasts try to get stuff running on a long-abandoned operating system. In any case, with most of the source we're compiling we're deep in the unsupported-land already.
 

onre

Administrator
Feb 8, 2019
I've solved source distribution for myself just by forking stuff on Github, checking out a release tag - say, v1.2.3 - into a detached HEAD state and starting off a new branch there with a suffix - in our example case v1.2.3-irix. One could tag commits in that branch to mark stable versions.
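The steps above, as commands - run here against a throwaway local repo standing in for a GitHub fork, with example tag names:

```shell
# Demo "upstream" with a release tag.
rm -rf /tmp/upstream /tmp/work
git init -q /tmp/upstream
cd /tmp/upstream
git config user.email dev@example.org
git config user.name dev
echo v1 > file
git add file
git commit -qm 'upstream release'
git tag v1.2.3

# The workflow: clone, detach at the release tag, branch with a suffix.
git clone -q /tmp/upstream /tmp/work
cd /tmp/work
git checkout -q v1.2.3              # detached HEAD at the release tag
git checkout -q -b v1.2.3-irix      # IRIX porting branch starts here
echo 'irix fix' >> file
git -c user.email=dev@example.org -c user.name=dev \
    commit -qam 'IRIX port fix'
git tag v1.2.3-irix-stable1         # mark a known-good state
```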
 

onre

Administrator
Feb 8, 2019
See trying to compile libarchive using a GCC-compiled XZ using binutils.
Provide details in a separate thread, please. I've done this many times. If I had to guess, this is a case where a library depending on something does not know where to look for it. More of a build script issue than a binutils issue.
 

hammy

Member
Jun 1, 2019
UK
So you think it's okay to just bin MIPSpro for C?
Maybe I was misunderstood - I'm not saying bin MIPSpro - just don't support people who want to build packages X and Y with MIPSpro and packages Z and AA with gcc. Mixing compilers. Nothing wrong with a MIPSpro tree - I think it's reasonable to support what we can, knowing there's stuff that's never going to work with it.

I've solved source distribution for myself just by forking stuff on Github, checking out a release tag - say, v1.2.3 - into a detached HEAD state and starting off a new branch there with a suffix - in our example case v1.2.3-irix. One could tag commits in that branch to mark stable versions.
That kinda works ok for small projects but is quite the pain when we are talking about something like GCC. We can certainly track "WIP" changes that way.

But for the actual "ports" source distribution - I'd follow the standard practice of patches against known good release tarballs mirroring changes we track in github.

e.g. For my native gcc8 build in didbs, it's actually a patch applied to the official gcc8 tarball that gets used, rather than a git checkout. (In didbs it's one big patch, which isn't ideal; the correct approach would be multiple patches.)
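A minimal sketch of that flow, with placeholder file names and contents: keep the pristine tree next to the patched one, diff them into a topic patch, then check the patch reapplies cleanly to a fresh copy of the "tarball" contents:

```shell
# Pristine tree (as unpacked from the official tarball) and the IRIX-fixed one.
mkdir -p /tmp/gcc-8.2.0.orig /tmp/gcc-8.2.0
echo 'original source' > /tmp/gcc-8.2.0.orig/config.c
echo 'irix-fixed source' > /tmp/gcc-8.2.0/config.c

# One patch per topic; diff exits 1 when the trees differ, hence the || true.
cd /tmp
diff -ruN gcc-8.2.0.orig gcc-8.2.0 > 0001-irix-config.patch || true

# Anyone can reproduce the tree: pristine sources plus the tracked patches.
rm -rf /tmp/rebuild
cp -r /tmp/gcc-8.2.0.orig /tmp/rebuild
cd /tmp/rebuild
patch -p1 < /tmp/0001-irix-config.patch
cat config.c    # now matches the IRIX-fixed tree
```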
 

onre

Administrator
Feb 8, 2019
Additionally, large git repos like gcc are really slow even on the Intel boxes of today. Patches against releases sound reasonable - creating these could easily be automated so that once you get something into a state where it builds, you'd just run sgugmkpatch with appropriate parameters, one of which would be a pointer to an official distribution, and you end up with a patch.
 

hammy

Member
Jun 1, 2019
UK
I also wanted to touch on this (it's about having some assurance of package testing):

11) Some kind of WIP concept would sure be nice. One way of ensuring make check results would be to simply compare the output and see if at least as many tests passed as did on the developer's box.
I did this originally with didbs - the problem is that "make check" can sometimes produce megabytes of output (looking at you, autotools).

Perhaps one solution here: All checks must pass - that means any tests that would fail are "patched out" - and that's the documentation showing the failures were accepted.
 

Unxmaal

Administrator
Feb 8, 2019
Additionally, large git repos like gcc are really slow even on the Intel boxes of today. Patches against releases sound reasonable - creating these could easily be automated so that once you get something into a state where it builds, you'd just run sgugmkpatch with appropriate parameters, one of which would be a pointer to an official distribution, and you end up with a patch.
See https://www.atlassian.com/blog/git/handle-big-repositories-git for solutions for this type of problem.
 

Raion

New member
Jun 21, 2019
Virginia
Maybe I was misunderstood - I'm not saying bin MIPSpro - just don't support people who want to build packages X and Y with MIPSpro and packages Z and AA with gcc. Mixing compilers. Nothing wrong with a MIPSpro tree - I think it's reasonable to support what we can, knowing there's stuff that's never going to work with it.
My question was more along the lines of if you're suggesting hardcoding stuff for GCC. But since you answered that, it's irrelevant/answered now.

I did this originally with didbs - the problem is that "make check" can sometimes produce megabytes of output (looking at you, autotools).

Perhaps one solution here: All checks must pass - that means any tests that would fail are "patched out" - and that's the documentation showing the failures were accepted.
Make check is a good tool - but depending on the dev, it's not always indicative that the package is 100% working. As always, user-reported issues with precompiled stuff are probably not to be overlooked.
 

onre

Administrator
Feb 8, 2019
There are two practical problems with these solutions when working with GCC. First, for some reason you can't specify the shallow-copy depth as a commit. Given the number of commits in the GCC repo, it's really hard to say "give me everything starting from 4.7.4", as the value of "n commits from now" changes every time something is committed. Second, having a single branch works for certain situations, but with GCC it's quite common to have to look into other branches to see what's changed since something last worked.
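For what it's worth, newer git (2.11+) can cut shallow history by date instead of commit count, which sidesteps the first problem, and extra branches can be added to a single-branch clone afterwards. A sketch against a tiny local repo standing in for GCC - the git flags are real, everything else is demo data:

```shell
# Demo repo standing in for a huge upstream like GCC: two commits, years
# apart, plus a second branch.
rm -rf /tmp/big /tmp/shallow
git init -q /tmp/big
cd /tmp/big
git config user.email dev@example.org
git config user.name dev
echo a > f && git add f
GIT_COMMITTER_DATE=2010-01-01T00:00:00 git commit -qm old --date=2010-01-01T00:00:00
echo b > f
GIT_COMMITTER_DATE=2020-01-01T00:00:00 git commit -qam new --date=2020-01-01T00:00:00
git branch gcc-9-branch

# Shallow clone cut by *date* instead of "n commits from now".
git clone -q --shallow-since=2015-01-01 --single-branch \
    "file:///tmp/big" /tmp/shallow
cd /tmp/shallow
git rev-list --count HEAD    # only the post-2015 history is present

# Later, pull in another branch for comparison, still shallow.
git remote set-branches --add origin gcc-9-branch
git fetch -q --shallow-since=2015-01-01 origin gcc-9-branch
```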
 

onre

Administrator
Feb 8, 2019
The usability of that version is questionable. :D It's more of a work in progress.
 

Knezzen

New member
Jun 24, 2019
Sweden
www.macintoshgarden.org
Everything I do is questionable. ;) In the end I had to move the MIPSpro symlinks out of /usr/bin to get the configure script to let gcc set the linker. I'm still terrible at this, but I'm learning.
 
