

This post is very welcome. It’s sure more relevant than many posts made in this instance.
Please continue to post whatever you like, as long as it’s on-topic.
The uppercase A in Axium. Very quickly skimmed `Cargo.toml` and `main.rs`.

- `axum` in the project description is also not cool.
- `lazy_static` and `once_cell`, when `OnceLock` has been stable since 1.70 and `axum`’s MSRV is 1.75?
- `min-sized-rust` flags?
- `println!("{proto}://{ip}:{port}");` instead of `println!("{0}://{1}:{2}", proto, ip, port);`, and the positional indices are redundant anyway.
- With `tracing` in the dependencies, you should actually use `tracing::error` instead of `eprintln!("❌ ...")`.

Okay. I will stop here.
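For reference, here is a minimal sketch of the `OnceLock` and formatting points (the names are made up, not taken from the project; the same reasoning is why errors should go through `tracing::error!` rather than `eprintln!`):

```rust
use std::sync::OnceLock;

// `OnceLock` covers the common lazy_static/once_cell use case on stable Rust
// (since 1.70), with no extra dependency.
fn default_proto() -> &'static str {
    static PROTO: OnceLock<String> = OnceLock::new();
    PROTO.get_or_init(|| "http".to_string()).as_str()
}

fn main() {
    let (proto, ip, port) = (default_proto(), "127.0.0.1", 8080);
    // Inline captured identifiers; no positional indices needed.
    println!("{proto}://{ip}:{port}");
}
```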
Reads okay for the most part. But I like how we see the same point about AI as a feature in some more serious real-life projects. There, we frame it as “Rust makes it harder for a ‘contributor’ to sneak in LLM-generated crap”.
/mj this post was an experiment to see if I should start posting from my personal jerk archive here. But exactly as I anticipated given the visibility in public feeds, this community has decent traffic but none of the culture, or any familiarity whatsoever with the meta-ironic jerking style of the OG community. The lack of a separate meta sub/community doesn’t help either, since it forces users to /mj inline. But that separate community would have been public too, possibly compounding the problem.
It is indeed, when paired with an optimizing assembler, a sophisticated static analysis tool in its own right. And just like with Rust, you have greybeards hating on the safety it provides because “meh, it’s not close to the hardware anymore”, like that old man Mel.
It’s called fetching it.
No. I was specifically thinking of `webfinger`. That’s Lemmy’s (ActivityPub) way of checking whether an id (user or community) exists or not. Then, an instance may “read” the remote community using its outbox (if requested), and a snapshot of that remote community would now exist in the local instance. That “snapshot” doesn’t get updated unless another attempt is made to view the now-known remote community, AND a certain period has passed (it was 24 hours the last time I looked). On that second attempt, a user may actually need to make a second request (refresh/retry) to see the updates, and may need to do that after a few seconds (depending on how busy/fast the instances are).
If at least one user, however, subscribes to that remote community, then the remote instance live-federates all updates from that community to the subscribed user’s local instance, and all these issues/complications go away.
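To make the lookup step concrete, here is a rough sketch using only the standard library (the instance and community names are hypothetical, and query-string encoding is skipped for brevity):

```rust
fn main() {
    let host = "lemmy.example.org"; // remote instance (hypothetical)
    let community = "somecommunity"; // remote community (hypothetical)

    // WebFinger (RFC 7033): a GET against the remote instance's well-known
    // endpoint. A successful JSON response confirms the id exists and links
    // to its ActivityPub actor, whose outbox can then be read for a snapshot.
    let url = format!("https://{host}/.well-known/webfinger?resource=acct:{community}@{host}");
    println!("{url}");
}
```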
You need subscribers from other instances, not views. Without subscribers, an instance may hold an outdated version of your community that never gets updates. People may see your community because someone pinged* it recently, maybe via a search, and their instance grabbed your outbox as it was at that time.
Ideal federation is achieved when you have 2+ subscribers from every instance federating with your community’s instance. One subscriber would be enough too, but people choose to nuke their accounts sometimes, and Lemmy has the option to really erase an account as if it never existed 😉
* or whatever Lemmy calls it, haven’t looked in a while.
`make` uses multiple processes for parallelism, or what the blog post (below) calls “interprocess parallelism”. `cargo`/`rustc` already has that, plus intraprocess parallelism for code generation (the backend). The plan is to have parallelism all the way, starting from the frontend. This blog post explains it all:
Cool and all. But missing some experiments:
- `lto = "off"`
- `strip = false` (for good measure)

ignore nulls, ignore race conditions, choose go
#WebVibin’ #HumoriestDev #DockerFiddler
Oh, we got a nu-M$er here. lol.
In the same vein, too many open source projects don’t factor in non-“gnu/linux” environments from the start.
No one is entitled to anything from open-source projects.
I spent time making sure one of my public tools was cross platform once. This was pre-Rust (a C project), and before CI runners were commonly available.
I did manage it with relative ease, but Mac/mac (what is it now?) without hardware or VMware wasn’t fun (or even supported/allowed). Windows was a space hog and it’s a shit non-POSIX OS created by shits anyway, and Cygwin/MSYS wouldn’t have cut it for multiple reasons including performance. The three major BSDs, however, were very easy (I had prior experience with FreeBSD, but it would have been easy in any case).
People seem to have forgotten that doing open source was supposed to be fun first and foremost. Or rather, the new generation seems to never have gotten that memo.
POSIX is usually where a good balance between fun and public service is struck. Whether Mac/mac is included depends on the project, AND the developers involved. With CLI tools, supporting Mac/mac is often easy, especially nowadays with CI runners. With GUIs, it’s more complicated/situational.
Windows support should always be seen as charity, not an obligation, for all projects where it’s not the primary target platform.
> You need to call `./y.sh prepare` again
Aha! Good to know. And yes, improved documentation would be of great help.
Thanks again for working on this.
But running

```
./y.sh prepare
./y.sh test --release
```

does work. That’s what gave me the impression that `clean all` doesn’t actually clean everything!
Yeah, apologies for not communicating the issue clearly.
```
cp config.example.toml config.toml
./y.sh prepare
./y.sh build --sysroot
./y.sh clean all
# above commands finish with success
# below, building succeeds, but it later fails with "error: failed to load source for dependency `rustc-std-workspace-alloc`"
./y.sh test --release
```
And then trying to use the “release” build fails:
```
% CHANNEL="release" ./y.sh cargo build --manifest-path tests/hello-world/Cargo.toml
[BUILD] build system
Finished `release` profile [optimized] target(s) in 0.03s
Using `/tmp/rust/rustc_codegen_gcc/build/libgccjit/d6f5a708104a98199ac0f01a3b6b279a0f7c66d3` as path for libgccjit
Compiling mylib v0.1.0 (/tmp/rust/rustc_codegen_gcc/tests/hello-world/mylib)
error[E0463]: can't find crate for `std`
  |
  = note: the `x86_64-unknown-linux-gnu` target may not be installed
  = help: consider downloading the target with `rustup target add x86_64-unknown-linux-gnu`
  = help: consider building the standard library from source with `cargo build -Zbuild-std`

For more information about this error, try `rustc --explain E0463`.
error: could not compile `mylib` (lib) due to 1 previous error
```
I will make sure to report issues directly in the future, although from account(s) not connected to this username.
Oh, and `clean all` doesn’t work reliably, since trying to build in `release` mode after building in `debug` mode and then `clean`ing is weirdly broken.
And it’s not clear from the README how to build in `release` mode without running `test --release`. And the fact that all of `--release-sysroot`, `--release --sysroot`, and `--release --release-sysroot` exist doesn’t help 😉
I gave this a try for the first time. The non-LTO build worked, but the LTO build failed:

```
x86_64-pc-linux-gnu-gcc-15.0.0: fatal error: ‘-fuse-linker-plugin’, but liblto_plugin.so not found
```

I don’t have the time to build gcc and test. But presumably, `liblto_plugin.so` should be included with `libgccjit.so`?
I only skimmed this. But my mind from the start immediately went to

```rust
struct CommonData {
    // common fields
}

enum VariantData {
    Variant1 {
        // Variant1 specific fields
    },
    // same for other variants
}

struct Search {
    common: CommonData,
    variant: VariantData,
}
```

but I reached the end and didn’t see it.
Isn’t this the way that comes to mind first for others too?
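For illustration, this is roughly what that split looks like with some fields filled in (the field and variant names here are hypothetical, not taken from the article):

```rust
struct CommonData {
    query: String,
    page: u32,
}

enum VariantData {
    Users { include_banned: bool },
    Posts { sort_by_score: bool },
}

struct Search {
    common: CommonData,
    variant: VariantData,
}

fn main() {
    let search = Search {
        common: CommonData { query: "rust".into(), page: 1 },
        variant: VariantData::Posts { sort_by_score: true },
    };
    // The common fields are reachable the same way regardless of the variant.
    println!("query = {}, page = {}", search.common.query, search.common.page);
    match &search.variant {
        VariantData::Users { include_banned } => println!("users, include_banned = {include_banned}"),
        VariantData::Posts { sort_by_score } => println!("posts, sort_by_score = {sort_by_score}"),
    }
}
```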