I'm kinda in the opposite camp. After doing a bunch of VB in my teens and twenties, I finally learned Java, C, and C++ in college, settling on mostly C for personal and professional projects. I became a core developer of Xfce and worked on that for 5 years.
Then I moved into backend development, where I was doing all Java, Scala, and Python. It was... dare I say... easy! Sure, these kinds of languages bring with them other problems, but I loved batteries-included standard libraries, build systems that could automatically fetch dependencies -- and oh my, such huge communities with open-source libraries for nearly anything I could imagine needing. Even if most of the build systems (maven, sbt, gradle, pip, etc.) have lots of rough edges, at least they exist.
Fast forward 12 years, and I find myself getting back into Xfce. Ugh. C is such a pain in the ass. I keep reinventing wheels, because even if there's a third-party library, most of the time it's not packaged on many of the distros/OSes our users use. Memory leaks, NULL pointer dereferences, use-after-free, data races, terrible concurrency primitives, no tuples, no generics, primitive type system... I hate it.
I've been using Rust for other projects, and despite it being an objectively more difficult language to learn and use, I'm still much more productive in Rust than in C.
I think Rust is harder to learn, but once you grok it, I don't think it's harder to use, or at least to use correctly. It's hard to write correct C because the standard tooling doesn't give you much help beyond `-Wall`. Rust's normal error messages are delightfully helpful. For example, I just wrote some bad code and got:
      --> src/main.rs:45:34
       |
    45 |     actions.append(&mut func(opt.selected));
       |                              ---- ^^^^^^^^^^^^ expected `&str`, found `String`
       |                              |
       |                              arguments to this function are incorrect
       |
    help: consider borrowing here
       |
    45 |     actions.append(&mut func(&opt.selected));
       |
I even had to cheat a little to get that far, because my editor used rust-analyzer to flag the error before I had the chance to build the code.
Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code. I learned a lot from its suggestions on how I could improve my work.
I started programming with C a long time ago, and even now, every few months, I dream of going back to those roots. It was so simple. You wrote code, you knew roughly which instructions it translated to, and there you went!
Then I try actually going through the motions of writing a production-grade application in C and I realise why I left it behind all those years ago. There's just so much stuff one has to do on one's own, with no support from the computer. So many things that one has to get just right for it to work across edge cases and in the face of adversarial users.
If I had to pick up a low-level language today, it'd likely be Ada. Similar to C, but with much more help from the compiler with all sorts of things.
> I started programming with C a long time ago, and even now, every few months, I dream of going back to those roots. It was so simple. You wrote code, you knew roughly which instructions it translated to, and there you went!
Related-- I'm curious what percentage of Rust newbies "fighting the borrow checker" is due to the compiler being insufficiently sophisticated vs. the newbie not realizing they're trying to get Rust to compile a memory error.
If you come from C to Rust, you basically have to rewire your brain. There are some corner cases that are wrong in Rust, but mostly you have to get used to a completely new way of thinking about object lifetimes and references to objects.
On x86-type machines, you still have a decent chance, because the instructions themselves are so complicated and high-level. It's not that C is close to the metal, it's that the metal has come up to nearly the level of C!
I wouldn't dare guess what a compiler does to a RISC target.
(But yes, this was back in the early-to-mid 2000s I think. Whether that is a long time ago I don't know.)
I'd call it a while ago, but not a long time. Long time to me is more like the 70s or 80s. I was born in 1996, so likely I'm biased: "before me = long time". It would be interesting to do a study on that. Give the words, request the years, correlate with birth year, voilà.
Also, COBOL and Fortran. Fortran is still being developed and is one of the languages supported as a first-class citizen by MPI.
There's a big cloud of hype at the bleeding edge, but if you dare to look beyond that cloud, there are many boring and well matured technologies doing fine.
When Ada was first announced, I rushed to read about it -- sounded good. But so far, never had access to it.
So, now, after a long time, Ada is starting to catch on???
When Ada was first announced, back then, my favorite language was PL/I, mostly on CP67/CMS, i.e., IBM's first effort at interactive computing with a virtual machine on an IBM 360 instruction set. Wrote a little code to illustrate digital Fourier calculations, digital filtering, and power spectral estimation (statistics from the book by Blackman and Tukey). Showed the work to a Navy guy at the JHU/APL and, thus, got "sole source" on a bid for some such software. Later wrote some more PL/I to have 'compatible' replacements for three of the routines in the IBM SSP (scientific subroutine package) -- converted 2 from O(n^2) to O(n log(n)) and the third got better numerical accuracy from some Ford and Fulkerson work. Then wrote some code for the first fleet scheduling at FedEx -- the BOD had been worried that the scheduling would be too difficult, some equity funding was at stake, and my code satisfied the BOD, opened the funding, and saved FedEx. Later wrote some code that saved a big part of IBM's AI software YES/L1. Gee, liked PL/I!
When I started on the FedEx code, I was still at Georgetown (teaching computing in the business school and working in the computer center) and in my apartment. So, I called the local IBM office and ordered the PL/I Reference, Program Guide, and Execution Logic manuals. Soon they arrived, for free, via a local IBM sales rep highly curious why someone would want those manuals -- sign of something big?
> So, now, after a long time, Ada is starting to catch on???
Money and hardware requirements.
Finally there is a mature open source compiler, and our machines are light years beyond those beefy workstations required for Ada compilers in the 1980's.
I fully understand that sentiment. For several years now, I have also felt the strong urge to develop something in pure C. My main language is C++, but I have noticed over and over again that I really enjoy using the old C libraries - the interfaces are just so simple and basic, there is no fluff. When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language (C++, Rust). To me, C is so attractive because it is so powerful, yet so simple that you can hold all the language features in your head without difficulty.
I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possibly performance improvements for your specific use case, that would have otherwise been hidden below several layers of library abstractions.
This has put me in a strange situation: everyone around me is always trying to use the latest feature of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could've been achieved with less code, using just a simple raw const char* pointer.
About 16 years ago I started working with a tech company that used "C++ as C", meaning they used a C++ compiler but wrote pretty much everything in C, with the exception of using classes, but more like Python data classes, with no polymorphism or inheritance, only composition. Their classes were not there to hide, but to encapsulate. Over time, some C++ features were allowed, like lambdas, but in general we wrote data-classed C -- and it screamed, it was so fast. We did all our own memory management, yes, using C-style mallocs, and the knowledge of what all the memory was doing significantly aided our optimizations, as we targeted running with in-cache data and code as much as possible. The results were market leading, and the company's facial recognition continually lands in the top 5 algorithms at the annual NIST FR Vendor test.
Slightly better ergonomics I suppose. Member functions versus function pointers come to mind, as do references vs pointers (so you get to use . instead of ->)
Yeah, slightly better ergonomics. Although we could, we simply did not use function pointers, we used member functions from the data class the data sat inside. We really tried to not focus on the language and tools, but to focus on the application's needs in the context of the problem it solves. Basically, treat the tech as a means to an end, not as a goal in itself.
Try doing C with a garbage collector ... it's very liberating.
Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free. And add `-lgc` to linking. It's already there on most systems these days, lots of things use it.
You can add some efficiency by `GC_free()` in cases where you're really really sure, but it's entirely optional, and adds a lot of danger. Using `GC_malloc_atomic()` also adds efficiency, especially for large objects, if you know for sure there will be no pointers in that object (e.g. a string, buffer, image etc).
There are weak pointers if you need them. And you can add finalizers for those rare cases where you need to close a file or network connection or something when an object is GCd, rather than knowing programmatically when to do it.
But simply using `GC_malloc()` instead of `malloc()` gets you a long long way.
You can also build Boehm GC as a full transparent `malloc()` replacement, and replacing `operator new()` in C++ too.
At first I really liked this idea, but then I realised the size of stack frames is quite limited, isn't it? So this would work for small data but perhaps not big data.
In theory, this is a compiler implementation detail. The compiler may choose to put large stacks in the heap, or to not even use a stack/heap system at all. The semantics of the language are independent of that.
In practice, stack sizes used to be quite limited and system-dependent. A modern Linux system will give you several megabytes of stack by default (128 MB in my case, just checked on my Linux Mint 22 Wilma). You can check it using "ulimit -a", and you can change it for your child processes using "ulimit -s SIZE_IN_KB". This is useful for your personal usage, but may pose problems when distributing your program, as you'll need to set up the environment where your program runs, which may be difficult or impossible. There's no ergonomic way to do that from inside your C program, that I know of.
I think one of the nice things about C is that, since the language doesn't abstract the heap away, it's really easy to replace manual memory management with GC or any other approach to managing memory, because most APIs simply expect heap allocation to be done with `malloc()`.
I think the only other language that has a similar property is Zig.
> Odin is a manual memory management based language. This means that Odin programmers must manage their own memory, allocations, and tracking. To aid with memory management, Odin has huge support for custom allocators, especially through the implicit context system.
Interesting that I was thinking of a language that combined Zig and Scala to allocate memory using implicits and this looks exactly what I was thinking.
Not that I actually think this is a good idea (I think the explicitly style of Zig is better), but it is an idea nonetheless.
I never liked that you have to choose between this and C++ though. C could use some automation, but that's C++ in "C with classes" mode. The sad thing is, you can't convince other people to use this mode, so all you have is either raw C interfaces which you have to wrap yourself, or C++ interfaces which require galaxy brain to fully grasp.
I remember growing really tired of "add member - add initializer - add finalizer - sweep and recheck finalizers" loop. Or calculating lifetime orders in your mind. If you ask which single word my mind associates with C, it will be "routine".
C++ would be amazing if its culture wasn't so obsessed with needless complexity. We had a local joke back then: every C++ programmer writes heaps of C++ code to pretend that the final page of code is not C++.
I completely agree with this sentiment. That's why I wrote Datoviz [1] almost entirely in C. I use C++ only when necessary, such as when relying on a C++ dependency or working with slightly more complex data structures. But I love C’s simplicity. Without OOP, architectural decisions become straightforward: what data should go in my structs, and what functions do I need? That’s it.
The most inconvenient aspect for me is manual memory management, but it’s not too bad as long as you’re not dealing with text or complex data structures.
> A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could've been achieved by fewer code, using just a simple raw const char* pointer.
C++ can avoid string copies by passing `const string&` instead of by value. Presumably you're also passing around a subset of the string, and doing bounds and null checks.
string_view is just a char* + len, which is what you should be passing around anyway.
Funnily enough, the problem with string_view is actually C APIs, and this problem exists in C too. Here's a perfect example (I'm using fopen, but pretty much every C API has this problem):
    FILE* open_file_from_substr(const char* start, int len)
    {
        (void)len; /* fopen has no (pointer, length) variant to give it to */
        return fopen(start, "r"); /* nope: start isn't NUL-terminated at len */
    }

    void open_files()
    {
        const char* buf = "file1.txt file2.txt file3.txt";
        for (int i = 0; i < 30; i += 10)
        {
            open_file_from_substr(buf + i, 9); /* nope. */
        }
    }
> When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language
I agree this is true when you develop _methods_, but I think this falls apart when you design programs. I find that you spend as much time thinking about memory management and pointer safety as you do algorithmic aspects, and not in a good way. Meanwhile, with C++, go and Rust, I think about lifetimes, ownership and data flow.
Variety is good. I got so used to working in pure C and older C++ that for a personal project I just started writing in C, until I realised that I don't have to consider other people and compatibility, so I had a lot of fun trying new things.
C was my first language and I quickly wrote my first console apps and a small game with Allegro. It feels incredibly simple in some aspects. I wouldn’t want to go back though. The build tools and managing dependencies feel outdated, somehow there is always a problem somewhere. Includes and the macro system feel crude. It’s easy to invoke undefined behavior and only realize later because a different compiler version or flag now optimizes differently.
Zig is my new C: it includes a C compiler and I can just import C headers and use them without wrappers. Comptime is awesome. Build tool, dependency management and testing included. Cross compilation is easy. It just looks like a modern version of C. If you can live with a language that is still in development, I would strongly suggest taking a look.
Otherwise I use Go if a GC is acceptable and I want a simple language or Rust if I really need performance and safety.
I sometimes write C recreationally. The real problem I have with it is that it's overly laborious for the boring parts (e.g. spelling out inductive datatypes). If you imagine that a large amount of writing a compiler (or similar) in C amounts to juggling tagged unions (allocating, pattern matching over, etc.), it's very tiring to write the same boilerplate again and again. I've considered writing a generator to alleviate much of the tedium, but haven't bothered to do it yet. I've also considered developing C projects by appealing to an embeddable language for prototyping (like Python, Lua, Scheme, etc.), and then committing the implementation to C after I'm content with it (otherwise, the burden of implementation is simply too high).
It's difficult because I do believe there's an aesthetic appeal in doing certain one-off projects in C: compiled size, speed of compilation, the sense of accomplishment, etc. but a lot of it is just tedious grunt work.
If you want to do microcontroller/embedded, I think C it still the overall best choice, supported by vendors.
Rust and Ada are probably slowly catching up.
You can certainly do entirely absurd things in Perl. But it is a lot easier / safer to work with. You get (or can get) a wealth of information when you do the wrong thing in Perl. With C, a segmentation fault is not always easy to pinpoint.
However, the tooling for C is better: with some of the IDEs out there you can set breakpoints, walk through the code in a debugger, and spot more errors at compile time. There is a debugger included with Perl, but after trying to use it a few times I have given up on it. Give me C and Visual Studio when I need debugging.
On the other hand, shooting yourself in the foot with C is a common occurrence. I have never had a segmentation fault in Perl. Nor have I had any problems managing the memory; the garbage collector appears to work well (at least for my needs).
    def route = fn (request) {
      if (request.method == GET || request.method == HEAD) do
        locale = "en"
        slash = if Str.ends_with?(request.url, "/") do "" else "/" end
        path_html = "./pages#{request.url}#{slash}index.#{locale}.html"
        if File.exists?(path_html) do
          show_html(path_html, request.url)
        else
          path_md = "./pages#{request.url}#{slash}index.#{locale}.md"
          if File.exists?(path_md) do
            show_md(path_md, request.url)
          else
            path_md = "./pages#{request.url}.#{locale}.md"
            if File.exists?(path_md) do
              show_md(path_md, request.url)
            end
          end
        end
      end
    }
> Virtual machines still suck a lot of CPU and bandwidth for nothing but emulation. Containers in Linux with cgroups are still full of RCE (remote command execution) and privilege escalation. New ones are discovered each year. The first report I got on those listed 10 or more RCE + PE (remote root on the machine). Remote root can also escape VMs probably also.
A proper virtual machine is extremely difficult to break out of (but it can still happen [1]). Containers are a lot easier to break out of. If virtual machines were more efficient in either CPU or RAM, I would want to use them more, but as it stands you're stuck choosing the worst of one or the other.
I've tried, but never succeeded in doing that; the complexity eventually seeps in through the cracks.
C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
Disabling exceptions is possible, but will come back to bite you the second you want to pull in external code.
You also lose some of the flexibility of C, unions become more complicated, struct offsets/C style polymorphism isn't even possible if I remember correctly.
> C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
I've never understood the motivation behind writing something in C++, but avoiding the standard library. Sure, it's possible to do, but to me, they are inseparable. The basic data types and algorithms provided by the standard library are major reasons to choose the language. They are relatively lightweight and memory-efficient. They are easy to include and link into your program. They are well understood by other C++ programmers--no training required. Throughout my career, I've had to work in places where they had a "No Standard Library" rule, but that just meant they implemented their own, and in all cases the custom library was worse. (Also, none of the companies could articulate a reason for why they chose to re-implement the standard library poorly--It was always blamed on some graybeard who left the company decades ago.)
Choosing C++ without the standard library seems like going skiing, but deliberately using only one ski.
Zig is a much simpler language than Rust. I'm a big Rust fan, but Rust is not even close to a drop-in replacement for C. It has a steep learning curve, and often requires thinking about and architecting your program much differently from how you might if you were using C.
For a C programmer, learning and becoming productive in Zig should be a much easier proposition than doing the same for Rust. You're not going to get the same safety guarantees you'd get with Rust, but the world is full of trade offs, and this is just one of them.
So this is a journey that starts in Ruby, goes through an SICP phase, and then eventually concedes that that isn't viable. It kinda seems like C is just a personal compromise for maintaining nerdiness rather than any specific performance need.
I think it's a pretty normal pattern I've seen (and been through) of learning-oriented development rather than thoughtful engineering.
But personally, AI coding has pushed me full circle back to ruby. Who wants to mentally interpret generated C code which could have optimisations and could also have fancy looking bugs. Why would anyone want to try disambiguating those when they could just read ruby like English?
> But personally, AI coding has pushed me full circle back to ruby.
This happened to me too. I’m using Python in a project right now purely because it’s easier for the AI to generate and easier for me to verify. AI coding saves me a lot of time, but the code is such low quality there’s no way I’d ever trust it to generate C.
It depends on what you need the code for. If it’s something mission critical, then using AI is likely going to take more time than it saves, but for a MVP or something where quality is less important than time to market, it’s a great time saver.
Also there’s often a spectrum of importance even within a project, eg maybe some internal tools aren’t so important vs a user facing thing. Complexity also varies: AI is pretty good at simple CRUD endpoints, and it’s a lot faster than me at writing HTML/CSS UI’s (ie the layout and styling, without the logic).
If you can isolate the AI code to code that doesn’t need to be high quality, and write the code that doesn’t yourself, it can be a big win. Or if you use AI for an MVP that will be incrementally replaced by higher quality code if the MVP succeeds, can be quite valuable since it allows you to test ideas quicker.
I personally find it to be a big win, even though I also spend a lot of time fighting the AI. But I wouldn’t want to build on top of AI code without cleaning it up myself.
There are also some tasks I’ve learned to just do myself: eg I do not let the AI decide my data model/database schema. Data is too important to leave it up to an AI to decide. Also outside of simple CRUD operations, it generates quite inefficient database querying so if it’s on a critical path, perhaps write the queries yourself.
* Rust is vastly easier to get started with as a new programmer than C or C++. The quality and availability of documentation, tutorials, tooling, ease of installation, ease of dependency management, ease of writing tests, etc. Learning C basically requires learning make / cmake / meson on top of the language, and maybe Valgrind and the various sanitizers too. C's "simplicity" is not always helpful to someone getting started.
* The Rust compiler isn't particularly slow. LLVM is slow. Monomorphization hurts the language, but any other language that made the same tradeoff would see the same problems. The compiler has also gotten much much faster in the last few years and switching linkers or compiler backends makes a huge difference.
* Orgs that have tracked this don't find Rust to be less productive. Within a couple of months programmers tend to be just as if not more productive than they were previously with other languages. The ramp-up is probably slower than, say, Go, but it's not Scala / Haskell. And again, the tooling & built-in test framework really help with productivity.
* Rust applications are very rarely slower than comparable C applications.
* Rust applications do tend to be larger than comparable C applications, but largely because of static vs. dynamic linking and larger debuginfo.
Rust evangelism is probably the worst part of Rust. Shallow comments stating Rust’s superiority read to me like somebody who wants to tell me about Jesus.
Definitely not true. One look at what a modern C compiler does to optimize the code you give it will disabuse you of that notion.
There's nothing special or magic about C code, and, if anything, C has moved further and further away from its "portable assembler" moniker over time. And compilers can emit very similar machine instructions for the same type of algorithm regardless of whether you're writing C, Rust, Go, Zig, etc.
Consider, for example, that clang/LLVM doesn't even really compile C. The C is first translated into LLVM's IR, which is then used to emit machine instructions.
Feature wise, yes. C forces you to keep a lot of irreducible complexity in your head.
> Rust has a much, much slower compiler than pretty much any language out there
True. But it doesn't matter much in my opinion. A decent PC should be able to grind through any Rust project in a few seconds.
> Rust applications are sometimes
Sometimes is a weasel word. C is sometimes slower than Java.
> Rust takes most people far longer to "feel" productive
C takes me more time to feel productive. I have to write code, then unit test, then property tests, then run valgrind, check ubsan is on. Make more tests. Do property testing, then fuzz testing.
Or I can write same stuff in Rust and run tests. Run miri and bigger test suite if I'm using unsafe. Maybe fuzz test.
I remember a project that used boost for very few things, but it included a single boost header in almost every file. That one boost header absolutely inflated the build times to insane levels.
Good for you. Like the grandparent commenter said, for others these tradeoffs might be important. E.g.:
> I am disappointed with how poorly Rust's build scales, even with the incremental test-utf-8 benchmark which shouldn't be affected that much by adding unrelated files. (...)
> I decided to not port the rest of quick-lint-js to Rust. But... if build times improve significantly, I will change my mind!
Look, you're picking a memory-unsafe language versus a safe one. Whatever meager gains you save on compilation times (and the link shows the difference is meager if you aren't on macOS, which I'm not) will be obliterated by losses in figuring out which UB nasal demon was accidentally released.
This is like that argument that dynamic types save time, because you can catch error in tests. But then have to write more tests to compensate, so you lose time overall.
> C takes me more time to feel productive. I have to write code, then unit test, then property tests, then run valgrind, check ubsan is on. Make more tests. Do property testing, then fuzz testing.
That's fair, but to me what drags C and C++ really down for me is the difficulty in building them. As I get older I just want to write the code and not mess with makefiles or CMake. I don't want starting a new project to be a "commitment" that requires me to sit down for two hours.
For me Rust isn't really competing against unchecked C. It's competing against Java and boy does the JVM suck outside of server deployments. C gets disqualified from the beginning, so what you're complaining about falls on deaf ears.
I'm personally suffering the consequences of "fast" C code every day. There are days where 30 minutes of my time are wasted on waiting for antivirus software. Things that ought to take 2 seconds take 2 minutes. What's crazy is that in a world filled with C programs, you can't say with a good conscience that antivirus software is unnecessary.
> That's fair, but to me what drags C and C++ really down for me is the difficulty in building them. As I get older I just want to write the code and not mess with makefiles or CMake. I don't want starting a new project to be a "commitment" that requires me to sit down for two hours.
Also, integrating 3rd party code has always been one of the worst parts of writing a C or C++ program. This 3p library uses Autoconf/Automake, that one uses CMake, the other one just ships with a Visual Studio .sln file... I want to integrate them all into my own code base with one build system. That is going to be a few hours or days of sitting there and figuring out which .c and .h files need to be considered, where they are, what build flags and -Ddefines are needed, how the build configuration translates into the right build flags and so on.
On more modern languages, that whole drama is done with pip install or cargo install.
(And yes, I was considering if I should shout in capslock ;) )
I have seen so many fresh starts in Rust that went great during week 1 and 2 and then they collided with the lifetime annotations and then things very quickly got very messy. Let's store a texture pointer created from an OpenGL context based on in-memory data into a HashMap...
    impl<'tex, 'gl, 'data, 'key> GlyphCache<'tex, 'gl, 'data, 'key> {
Yay? And then your hashmap .or_insert_with fails due to lifetime checks so you need a match on the hashmap entry and now you're doing the key search twice and performance is significantly worse than in C.
Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through the Cargo docs on specifying dependencies.
> Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through [link to cargo docs]
This is just bizarre to me, the claim that dependency management is easier in C projects than in Rust. It is incredibly rare that adding a dependency to a C project is just an #include and -l flag away. What decent-sized project doesn't use autotools or cmake or meson or whatever? Adding a dependency to any of those build systems is more work than adding a single, short line to Cargo.toml.
And even if you are just using a hand-crafted makefile (no thank you, for any kind of non-trivial, cross-platform project), how do you know that dependency is present on the system? You're basically just ignoring that problem and forcing your users to deal with it.
The best feature of C is the inconvenience of managing dependencies. This encourages a healthy mistrust of third-party code. Rust is unfortunately bundled with an excellent package manager, so it's already well on its way to NPM-style dependency hell.
Can't help but agree, as much as I prefer Rust over C.
On the other hand, C definitely goes too far into the opposite extreme. I am very tired of reinventing wheels in C because integrating third-party dependencies is even more annoying than writing and maintaining my own versions of common routines.
Long compile times with Rust don't really bother me that much. If it's someone else's program that I just want to build and run for myself, the one-time hit of building it isn't a big deal. I can be patient.
If it's something I'm actively developing, the compile is incremental, so it doesn't take that long.
What does often take longer than I'd like is linking. I need to look into those tricks where you build all the infrequently-changing bits (like third-party dependency crates) into a shared library, after which linking is very quick. For debug builds, this could speed up my development cycle quite a bit.
It is when the root cause is tooling, not language features.
You don't need to wait for long compile times in Haskell if you don't want to, there are interpreters and REPLs available as well.
You don't need to wait for long compile times in C++ if you don't want to, most folks use binary libraries, not every project is compiled from scratch, there are incremental compilers and linkers, REPLs like ROOT, managed versions with JIT like C++/CLI, and if using modern tooling like Visual C++ or Live++, hot code reloading.
The C standard makes provisions for compiler implementers that absolve them of responsibility for the complexity of the C language. Since most people never actually learn all the undefined behavior specified in the standard, and compilers accept it, the language might seem simpler, but it's actually only the compilers that are simpler.
You can argue that Rust generics are a trivial example of increased complexity vs. the C language, and I'd kinda agree, except that the language would be cumbersome to use without them once all the undefined C behavior was defined. Complexity can't disappear; it can only be moved around.
I like the Rust ADTs and the borrow checker, but I can't stand the syntax. I just wish it had Lisp syntax, but making it myself is far beyond my abilities.
At least apparent complexity. See "Expert C Programming: Deep C Secrets", which creeps up on you shockingly fast, because C pretends to be simple by leaving things undefined, but in real life things need some kind of behavior.
IMO these are the major downsides of Rust in descending order of importance:
- Project leadership being at the whims of the moderators
- Language complexity
- Openly embracing 3rd party libraries and ecosystems for pretty much anything
- Having to rely on esoteric design choices to wrestle the compiler into using specific optimizations
- The community embracing absurd design complexity like implementing features via extension traits in code sections separated from both where the feature is going to be used and where either the structs and traits are implemented
- A community of zealots
I think the upsides easily outcompete the downsides, but I'd really wish it'd resolve some of these issues...
Rust makes explicit what the C standard says you can't ignore, but leaves up to you rather than the compiler. In this sense, Rust is a simpler and easier language than C.
That really depends what you want to do. All that security in Rust is only needed if there is a danger of hacks compromising the system.
The moment you start building something that's not exposed to the internet, where hacking it has no implications, C beats it due to simplicity and speed of development.
> All that security in Rust is only needed if there is a danger of hacks compromising the system.
It's not just about security, it's about reliability too. If my program crashes because of a use-after-free or null pointer dereference, I'm going to be pissed off even if there aren't security implications.
I prefer Rust to C for all sorts of projects, even those that will never sit in front of a network.
C might beat Rust at simplicity and speed of development (I don't know, I never developed in Rust), but I remember why I stopped developing in C about 30 years ago: the hundreds of inevitably bug-ridden lines of C it took to build a CGI back then (malloc, free, strcpy, etc.) vs little more than string slicing and "string" . "concatenation" in Perl, and forget about everything else. That could have been Python (which I didn't know about), or the languages that were born in those years: Ruby and PHP. Even Java was simpler to write. Runtime speed was seldom a problem even in the 90s. C programs are fast to run, but they are not fast to develop.
It also depends on what you want to get away from.
I don't disagree that Rust might technically be a better option for a new project, but it's still a fairly fast-moving language with an ecosystem that hasn't completely settled down. Many people are increasingly turned off by fast-changing developer environments and ecosystems, and C provides you with a language and libraries that have already been around for decades and aren't likely to change much.
There are also so many programming concepts and ideas in Rust, which are all fine and useful in their own right, but they are a distraction if you don't need them. Some might say that you could just not use them, but they sneak up on you in third party libraries, code snippets, examples and suggestions from others.
Personally I find C a more cosy language, which is great for just enjoying programming for a bit.
Correctness is not just about security. And the threat environment to which a program may eventually be exposed is not always obvious up front.
Also, no: that's only true for some kinds of programs. Rust, C++, and Go all have a much easier ecosystem for things like data structures and more complex libraries, which makes writing many programs much easier than in C.
The only place I find C still useful over one of the other three is embedded, mostly because of the ecosystem, and rust is catching up there also.
(This is somewhat ironic, because I teach a class in C. It remains a useful language when you want someone to quickly see the relationship between the line of code they wrote and the resulting assembly, but it's also fraught - undefined behavior lurks in many places and adds a lot of pain. I will one day switch the class to rust, but I inherited the C version and it takes a while.)
> much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
So many people have implemented those data structures, though, and they are available freely and openly; you can choose to your liking, e.g. ohash, uthash, or khash, and that is only for hash tables.
Those complex libraries are out there, too, for C, obviously.
The reason for why it is not in the standard library is obvious enough: there are many ways to implement those data structures, and there is no one size that fits all.
There are! But composability is easier in the languages that have generics/templates/etc. There's less passing around of function pointers and writing of custom comparator functions, using something like binary search or sort as an example, and the fact that those comparators can be inlined can often make the rust or C++ version faster than the "as simple to write" C version.
Obviously, all of these languages are capable of doing anything the others can. Turing complete is Turing complete. But compare the experience of writing a multithreaded program that has, as part of it, an embedded HTTP server that provides statistics as it runs. It's painful in C, fairly annoying in C++ unless you match well to some existing framework, pretty straightforward in Rust, and kinda trivial in Go.
One comment talked about not using a (faster) B-tree instead of an AVL tree in C, because of the complexity (and thus maintenance burden and risk of mistakes) it would add to the code.
Not really. Rustup only ships a limited number of toolchains, with some misses that (for me) are real head-scratchers. i686-unknown-none, for example. Can't get it from rustup. I'm sure there's a way to roll your own toolchain, but Rust's docs might as well tell you to piss up a rope for how much they talk about that.
Why is this important? C is the lingua franca of digital infrastructure. Whether that's due to merit or inertia is left as an exercise for the reader. I sure hope your new project isn't meant to supplant that legacy infrastructure, 'cause if it needs to run on legacy hardware, Rust won't work.
This is an incredibly annoying constraint when you're starting a new project, and Rust won't let you because you can't target the platform you need to target. For example, I spent hours building a Rust async runtime for Zephyr, only to discover it can't run on half the platforms Zephyr supports because Rust doesn't ship support for those platforms.
Going from mid-90s assembly to full stack dev/sec/ops, getting back to just a simple Borland editor with C or assembly code sounds like a lovely dream.
Your brain works a certain way, but you're forced to evolve into the nightmare half-done complex stacks we run these days, and it's just not the same job any more.
I like Nim- compiles to C so you get similarly close to the instructions and you can use a lot of high level features if you want to, but you can also stay close to the metal.
> Defensive programming all the way : all bugs are reduced to zero right from the start
Has it been fuzzed? Have you had someone who is very good at finding bugs in C code look at it carefully? It is understandable if the answer to one or both is "no". But we should be careful about the claims we make about code.
I'm kinda in the opposite camp. After doing a bunch of VB in my teens and tweens, I finally learned Java, C, and C++ in college, settling on mostly C for personal and professional projects. I became a core developer of Xfce and worked on that for 5 years.
Then I moved into backend development, where I was doing all Java, Scala, and Python. It was... dare I say... easy! Sure, these kinds of languages bring with them other problems, but I loved batteries-included standard libraries, build systems that could automatically fetch dependencies -- and oh my, such huge communities with open-source libraries for nearly anything I could imagine needing. Even if most of the build systems (maven, sbt, gradle, pip, etc.) have lots of rough edges, at least they exist.
Fast forward 12 years, and I find myself getting back into Xfce. Ugh. C is such a pain in the ass. I keep reinventing wheels, because even if there's a third-party library, most of the time it's not packaged on many of the distros/OSes our users use. Memory leaks, NULL pointer dereferences, use-after-free, data races, terrible concurrency primitives, no tuples, no generics, primitive type system... I hate it.
I've been using Rust for other projects, and despite it being an objectively more difficult language to learn and use, I'm still much more productive in Rust than in C.
I think Rust is harder to learn, but once you grok it, I don't think it's harder to use, or at least to use correctly. It's hard to write correct C because the standard tooling doesn't give you much help beyond `-Wall`. Rust's normal error messages are delightfully helpful. For example, I just wrote some bad code and got:

    --> src/main.rs:45:34
    |
    45 | actions.append(&mut func(opt.selected));
    |                     ----    ^^^^^^^^^^^^ expected `&str`, found `String`
    |                     |
    |                     arguments to this function are incorrect
    |
    help: consider borrowing here
    |
    45 | actions.append(&mut func(&opt.selected));
    |

I even had to cheat a little to get that far, because my editor used rust-analyzer to flag the error before I had the chance to build the code.

Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code. I learned a lot from its suggestions on how I could improve my work.
I started programming with C a long time ago, and even now, every few months, I dream of going back to those roots. It was so simple. You wrote code, you knew roughly which instructions it translated to, and there you went!
Then I try actually going through the motions of writing a production-grade application in C and I realise why I left it behind all those years ago. There's just so much stuff one has to do on one's own, with no support from the computer. So many things that one has to get just right for it to work across edge cases and in the face of adversarial users.
If I had to pick up a low-level language today, it'd likely be Ada. Similar to C, but with much more help from the compiler with all sorts of things.
> I started programming with C a long time ago, and even now, every few months, I dream of going back to those roots. It was so simple. You wrote code, you knew roughly which instructions it translated to, and there you went!
Related-- I'm curious what percentage of Rust newbies "fighting the borrow checker" is due to the compiler being insufficiently sophisticated vs. the newbie not realizing they're trying to get Rust to compile a memory error.
If you come from C to Rust, you basically have to rewire your brain. There are some corner cases that are wrong in Rust, but mostly you have to get used to a completely new way of thinking about object lifetimes and references to objects.
Yeah, back in the MS-DOS and Amiga glory days when C compilers were dumb, and anyone writing Assembly by hand could easily outperform them.
C source files for demoscene and games were glorified macro assemblers full of inline assembly.
> Similar to C, but with much more help from the compiler with all sorts of things.
Is that not the problem rust was created to solve?
Rust is more like C++ (though still not really) than like C. Rust is a complete re-imagination of what a systems language could be.
> You wrote code, you knew roughly which instructions it translated to, and there you went!
This must have been a very, very long time ago; with optimizing compilers you don't even really know whether they will emit any instructions at all.
On x86-type machines, you still have a decent chance, because the instructions themselves are so complicated and high-level. It's not that C is close to the metal, it's that the metal has come up to nearly the level of C!
I wouldn't dare guess what a compiler does to a RISC target.
(But yes, this was back in the early-to-mid 2000s I think. Whether that is a long time ago I don't know.)
> I wouldn't dare guess what a compiler does to a RISC target.
Just let your C(++) compiler generate assembly on an ARM-64 platform, like Apple Silicon or iOS. Fasten your seat belt.
I'd call it a while ago, but not a long time. Long time to me is more like 70s or 80s. I was born in 1996 so likely I'm biased: "before me=long time". It would be interesting to do a study on that. Give the words, request the years, correlate with birthyear, voila
Don't forget Pascal is still alive.
From what I remember about Ada, it is basically Pascal for rockets.
And some call it Boomer Rust, if I recall.
Hahaha! I'll start calling Rust "Zoomer Ada"
Also, COBOL and FORTRAN. FORTRAN is still being developed and one of the languages supported as first class citizen by MPI.
There's a big cloud of hype at the bleeding edge, but if you dare to look beyond that cloud, there are many boring and well matured technologies doing fine.
When Ada was first announced, I rushed to read about it -- sounded good. But so far, never had access to it.
So, now, after a long time, Ada is starting to catch on???
When Ada was first announced, back then, my favorite language was PL/I, mostly on CP67/CMS, i.e., IBM's first effort at interactive computing with a virtual machine on an IBM 360 instruction set. Wrote a little code to illustrate digital Fourier calculations, digital filtering, and power spectral estimation (statistics from the book by Blackman and Tukey). Showed the work to a Navy guy at the JHU/APL and, thus, got "sole source" on a bid for some such software. Later wrote some more PL/I to have 'compatible' replacements for three of the routines in the IBM SSP (scientific subroutine package) -- converted 2 from O(n^2) to O(n log(n)) and the third got better numerical accuracy from some Ford and Fulkerson work. Then wrote some code for the first fleet scheduling at FedEx -- the BOD had been worried that the scheduling would be too difficult, some equity funding was at stake, and my code satisfied the BOD, opened the funding, and saved FedEx. Later wrote some code that saved a big part of IBM's AI software YES/L1. Gee, liked PL/I!
When I started on the FedEx code, was still at Georgetown (teaching computing in the business school and working in the computer center) and in my apartment. So, called the local IBM office and ordered the PL/I Reference, Program Guide, and Execution Logic manuals. Soon they arrived, for free, via a local IBM sales rep highly curious why someone would want those manuals -- sign of something big?
Now? Microsoft's .NET. On Windows, why not??
> So, now, after a long time, Ada is starting to catch on???
Money and hardware requirements.
Finally there is a mature open source compiler, and our machines are light years beyond those beefy workstations required for Ada compilers in the 1980's.
I fully understand that sentiment. For several years now, I have also felt the strong urge to develop something in pure C. My main language is C++, but I have noticed over and over again that I really enjoy using the old C libraries - the interfaces are just so simple and basic, there is no fluff. When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language (C++, Rust). To me, C is so attractive because it is so powerful, yet so simple that you can hold all the language features in your head without difficulty.
I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possible performance improvements for your specific use case that would otherwise have been hidden below several layers of library abstractions.
This has put me in a strange situation: everyone around me is always trying to use the latest feature of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, when exactly the same functionality could have been achieved with less code, using just a simple raw const char* pointer.
About 16 years ago I started working with a tech company that used "C++ as C", meaning they used a C++ compiler but wrote pretty much everything in C, with the exception of using classes, but more like Python data classes, with no polymorphism or inheritance, only composition. Their classes were not there to hide, but to encapsulate. Over time, some C++ features were allowed, like lambdas, but in general we wrote data-classed C - and it screamed, it was so fast. We did all our own memory management, yes, using C-style mallocs, and the knowledge of what all the memory was doing significantly aided our optimizations, as we aimed to run with data and code in cache as much as possible. The results were market leading, and the company's facial recognition continually lands in the top 5 algorithms at the annual NIST FR Vendor test.
Sounds like they knew what they were doing. How is using C++ with only data classes different from using C with structs?
Slightly better ergonomics I suppose. Member functions versus function pointers come to mind, as do references vs pointers (so you get to use . instead of ->)
Yeah, slightly better ergonomics. Although we could, we simply did not use function pointers, we used member functions from the data class the data sat inside. We really tried to not focus on the language and tools, but to focus on the application's needs in the context of the problem it solves. Basically, treat the tech as a means to an end, not as a goal in itself.
Namespaces are useful for wrapping disparate bits of C code, to get around namespace collisions during integration.
Try doing C with a garbage collector ... it's very liberating.
Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free. And add `-lgc` to linking. It's already there on most systems these days, lots of things use it.
You can add some efficiency by `GC_free()` in cases where you're really really sure, but it's entirely optional, and adds a lot of danger. Using `GC_malloc_atomic()` also adds efficiency, especially for large objects, if you know for sure there will be no pointers in that object (e.g. a string, buffer, image etc).
There are weak pointers if you need them. And you can add finalizers for those rare cases where you need to close a file or network connection or something when an object is GCd, rather than knowing programmatically when to do it.
But simply using `GC_malloc()` instead of `malloc()` gets you a long long way.
You can also build Boehm GC as a full transparent `malloc()` replacement, and replacing `operator new()` in C++ too.
> Try doing C with a garbage collector ... it's very liberating.
> Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free.
Even more liberating (and dangerous!): do not even malloc, just use variable-length arrays:
This style forces you to alloc the memory at the outermost scope where it is visible, which is a nice thing in itself (even if you use malloc).

At first I really liked this idea, but then I realised the size of stack frames is quite limited, isn't it? So this would work for small data but perhaps not big data.
In theory, this is a compiler implementation detail. The compiler may chose to put large stacks in the heap, or to not even use a stack/heap system at all. The semantics of the language are independent of that.
In practice, stack sizes used to be quite limited and system-dependent. A modern Linux system will give you several megabytes of stack by default (128MB in my case, just checked on my Linux Mint 22 Wilma). You can check it using "ulimit -a", and you can change it for your child processes using "ulimit -s SIZE_IN_KB". This is useful for your personal usage, but may pose problems when distributing your program, as you'll need to set up the environment where your program runs, which may be difficult or impossible. There's no ergonomic way to do that from inside your C program, that I know of.
I think one of the nice things about C is that, since the language was not designed to abstract things like the heap, it is really easy to replace manual memory management with GC or any other approach to managing memory, because most APIs expect `malloc()` to be used when heap allocation is needed.
I think the only other language that has a similar property is Zig.
Odin has this too:
> Odin is a manual memory management based language. This means that Odin programmers must manage their own memory, allocations, and tracking. To aid with memory management, Odin has huge support for custom allocators, especially through the implicit context system.
https://odin-lang.org/docs/overview/#implicit-context-system
Interesting; I was thinking of a language that combined Zig and Scala to allocate memory using implicits, and this looks exactly like what I was thinking of.

Not that I actually think this is a good idea (I think the explicit style of Zig is better), but it is an idea nonetheless.
Which GC is that you’re using in these examples?
I'm not OP but the most popular C GC is Boehm's: https://www.hboehm.info/gc/
Most of the embedded world is still C, if you want to write C that's probably the place to find a community.
> I also like that C forces me to do stuff myself
I never liked that you have to choose between this and C++ though. C could use some automation, but that's C++ in "C with classes" mode. The sad thing is, you can't convince other people to use this mode, so all you have is either raw C interfaces which you have to wrap yourself, or C++ interfaces which require galaxy brain to fully grasp.
I remember growing really tired of "add member - add initializer - add finalizer - sweep and recheck finalizers" loop. Or calculating lifetime orders in your mind. If you ask which single word my mind associates with C, it will be "routine".
C++ would be amazing if its culture wasn't so obsessed with needless complexity. We had a local joke back then: every C++ programmer writes heaps of C++ code to pretend that the final page of code is not C++.
I completely agree with this sentiment. That's why I wrote Datoviz [1] almost entirely in C. I use C++ only when necessary, such as when relying on a C++ dependency or working with slightly more complex data structures. But I love C’s simplicity. Without OOP, architectural decisions become straightforward: what data should go in my structs, and what functions do I need? That’s it.
The most inconvenient aspect for me is manual memory management, but it’s not too bad as long as you’re not dealing with text or complex data structures.
[1] https://datoviz.org/
> A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could've been achieved by fewer code, using just a simple raw const char* pointer.
C++ can avoid string copies by passing `const string&` instead of by value. Presumably you're also passing around a subset of the string, and you're doing bounds and null checks, e.g.
string_view is just a char* + len, which is what you should be passing around anyway.

Funnily enough, the problem with string_view is actually C APIs, and this problem exists in C too. Here's a perfect example (I'm using fopen, but pretty much every C API has this problem):
> When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language

I agree this is true when you develop _methods_, but I think this falls apart when you design programs. I find that you spend as much time thinking about memory management and pointer safety as you do algorithmic aspects, and not in a good way. Meanwhile, with C++, Go and Rust, I think about lifetimes, ownership and data flow.
Variety is good. I got so used to working in pure C and older C++ that for a personal project I just started writing in C, until I realised that I don't have to consider other people and compatibility, so I had a lot of fun trying new things.
C was my first language and I quickly wrote my first console apps and a small game with Allegro. It feels incredibly simple in some aspects. I wouldn't want to go back, though. The build tools and dependency management feel outdated; somehow there is always a problem somewhere. Includes and the macro system feel crude. It's easy to invoke undefined behavior and only realize it later because a different compiler version or flag now optimizes differently.

Zig is my new C: it includes a C compiler, and I can just import C headers and use them without wrappers. Comptime is awesome. Build tool, dependency management and testing are included. Cross compilation is easy. It just looks like a modern version of C. If you can live with a language that is still in development, I would strongly suggest taking a look.
Otherwise I use Go if a GC is acceptable and I want a simple language or Rust if I really need performance and safety.
I sometimes write C recreationally. The real problem I have with it is that it's overly laborious for the boring parts (e.g. spelling out inductive datatypes). If you imagine that a large amount of writing a compiler (or similar) in C amounts to juggling tagged unions (allocating, pattern matching over, etc.), it's very tiring to write the same boilerplate again and again. I've considered writing a generator to alleviate much of the tedium, but haven't bothered to do it yet. I've also considered developing C projects by appealing to an embeddable language for prototyping (like Python, Lua, Scheme, etc.), and then committing the implementation to C after I'm content with it (otherwise, the burden of implementation is simply too high).
It's difficult because I do believe there's an aesthetic appeal in doing certain one-off projects in C: compiled size, speed of compilation, the sense of accomplishment, etc. but a lot of it is just tedious grunt work.
Despite what some people religiously think about programming languages, imo C was so successful because it is practical.
Yes it is unsafe and you can do absurd things. But it also doesn't get in the way of just doing what you want to do.
I wouldn't say C was successful. It still is! What other language from the 70s is still in the top 5 languages?
https://www.tiobe.com/tiobe-index/
No, it's because of Unix and AT&T monopoly.
How was AT&T’s monopoly a driver? It’s not like they forced anyone to use UNIX.
If you want to do microcontroller/embedded, I think C it still the overall best choice, supported by vendors. Rust and Ada are probably slowly catching up.
Sounds a bit like Perl, but at a lower level?
You can certainly do entirely absurd things in Perl. But it is a lot easier / safer to work with. You get, or can get, a wealth of information when you do the wrong thing in Perl.
With C, a segmentation fault is not always easy to pinpoint.
The tooling for C is a point in its favor, though: with some of the IDEs out there you can set breakpoints, walk through the code in a debugger, and spot more errors at compile time.
There is a debugger included with Perl, but after trying to use it a few times I have given up on it.
Give me C and Visual Studio when I need debugging.
On the other hand, shooting yourself in the foot with C is a common occurrence.
I have never had a segmentation fault in Perl. Nor have I had any problems managing memory; the garbage collector appears to work well (at least for my needs).
Eh, segfaults are like the easiest error to debug; they almost always tell you exactly where the problem is.
Sounds a bit like JavaScript, but at a lower level?
I wouldn’t compare them, C is very simple.
Yes, but there are similarities, it has the same hacker mind set imo.
Here's what kc3 code looks like (taken from [1]):
[1] https://git.kmx.io/kc3-lang/kc3/_tree/master/httpd/page/app/...

So nobody would use code written in Common Lisp... but they will use code written in an entirely new language... right...
> Virtual machines still suck a lot of CPU and bandwidth for nothing but emulation. Containers in Linux with cgroups are still full of RCE (remote command execution) and privilege escalation. New ones are discovered each year. The first report I got on those listed 10 or more RCE + PE (remote root on the machine). Remote root can also escape VMs probably also.
A proper virtual machine is extremely difficult to break out of (but it can still happen [1]). Containers are a lot easier to break out of. If virtual machines were more efficient in either CPU or RAM, I would want to use them more, but as it is, it's the worst of both.
[1] https://www.zerodayinitiative.com/advisories/ZDI-23-982/
This reads like a cautionary tale about getting nerdsniped, without a happy ending.
C, or more precisely a constrained C++ is my go to language for side projects.
Just pick the right projects and the language shines.
I've tried, but never succeeded in doing that; the complexity eventually seeps in through the cracks.
C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
Disabling exceptions is possible, but will come back to bite you the second you want to pull in external code.
You also lose some of the flexibility of C, unions become more complicated, struct offsets/C style polymorphism isn't even possible if I remember correctly.
I love the idea though :)
> C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
I've never understood the motivation behind writing something in C++, but avoiding the standard library. Sure, it's possible to do, but to me, they are inseparable. The basic data types and algorithms provided by the standard library are major reasons to choose the language. They are relatively lightweight and memory-efficient. They are easy to include and link into your program. They are well understood by other C++ programmers--no training required. Throughout my career, I've had to work in places where they had a "No Standard Library" rule, but that just meant they implemented their own, and in all cases the custom library was worse. (Also, none of the companies could articulate a reason for why they chose to re-implement the standard library poorly--It was always blamed on some graybeard who left the company decades ago.)
Choosing C++ without the standard library seems like going skiing, but deliberately using only one ski.
The stdlib makes choices that might not be optimal for everyone.
Plenty of code bases also predate it, when I started coding C++ in 1995 most people were still rolling their own.
I'm in the same boat; I now use exclusively C, and D (with the -betterC flag) for my own projects
I refuse to touch anything else, but I keep an eye on the new languages that are being worked on, Zig for example
Try Zig; it's C with a bit of polish.
Why zig and not Rust? Just to throw the question out there :-)
Zig is a much simpler language than Rust. I'm a big Rust fan, but Rust is not even close to a drop-in replacement for C. It has a steep learning curve, and often requires thinking about and architecting your program much differently from how you might if you were using C.
For a C programmer, learning and becoming productive in Zig should be a much easier proposition than doing the same for Rust. You're not going to get the same safety guarantees you'd get with Rust, but the world is full of trade offs, and this is just one of them.
So this is a journey that starts in Ruby, goes through an SICP phase, and eventually concedes that it isn't viable. It kinda seems like C is just the personal compromise of trying to maintain nerdiness rather than any specific performance need.
I think it's a pretty normal pattern I've seen (and been through) of learning-oriented development rather than thoughtful engineering.
But personally, AI coding has pushed me full circle back to ruby. Who wants to mentally interpret generated C code which could have optimisations and could also have fancy looking bugs. Why would anyone want to try disambiguating those when they could just read ruby like English?
> But personally, AI coding has pushed me full circle back to ruby.
This happened to me too. I’m using Python in a project right now purely because it’s easier for the AI to generate and easier for me to verify. AI coding saves me a lot of time, but the code is such low quality there’s no way I’d ever trust it to generate C.
> AI coding saves me a lot of time, but the code is such low quality
Given that low quality code is perhaps the biggest time-sink relating to our work, I'm struggling to reconcile these statements?
It depends on what you need the code for. If it’s something mission critical, then using AI is likely going to take more time than it saves, but for a MVP or something where quality is less important than time to market, it’s a great time saver.
Also there’s often a spectrum of importance even within a project, eg maybe some internal tools aren’t so important vs a user facing thing. Complexity also varies: AI is pretty good at simple CRUD endpoints, and it’s a lot faster than me at writing HTML/CSS UI’s (ie the layout and styling, without the logic).
If you can isolate the AI code to code that doesn’t need to be high quality, and write the code that doesn’t yourself, it can be a big win. Or if you use AI for an MVP that will be incrementally replaced by higher quality code if the MVP succeeds, can be quite valuable since it allows you to test ideas quicker.
I personally find it to be a big win, even though I also spend a lot of time fighting the AI. But I wouldn’t want to build on top of AI code without cleaning it up myself.
There are also some tasks I’ve learned to just do myself: eg I do not let the AI decide my data model/database schema. Data is too important to leave it up to an AI to decide. Also outside of simple CRUD operations, it generates quite inefficient database querying so if it’s on a critical path, perhaps write the queries yourself.
As many people have already said, for starting a new project Rust beats C in every way
Rust is not free of trade offs and you're not helping the cause the way you think you are.
Just a few off the top:
- Rust is a much more complex language than C
- Rust has a much, much slower compiler than pretty much any language out there
- Rust takes most people far longer to "feel" productive
- Rust applications are sometimes (often?) slower than comparable C applications
- Rust applications are sometimes (often?) larger than comparable C applications
You may not value these things, or you may value other things more.
That's completely fine, but please don't pretend as if Rust makes zero trade offs in exchange for the safety that people seem to value so much.
Many of these are false?
* Rust is vastly easier to get started with as a new programmer than C or C++. The quality and availability of documentation, tutorials, tooling, ease of installation, ease of dependency management, ease of writing tests, etc. Learning C basically requires learning make / cmake / meson on top of the language, and maybe Valgrind and the various sanitizers too. C's "simplicity" is not always helpful to someone getting started.
* The Rust compiler isn't particularly slow. LLVM is slow. Monomorphization hurts the language, but any other language that made the same tradeoff would see the same problems. The compiler has also gotten much much faster in the last few years and switching linkers or compiler backends makes a huge difference.
* Orgs that have tracked this don't find Rust to be less productive. Within a couple of months programmers tend to be just as productive as, if not more productive than, they were previously with other languages. The ramp-up is probably slower than, say, Go, but it's not Scala / Haskell. And again, the tooling & built-in test framework really help with productivity.
* Rust applications are very rarely slower than comparable C applications
* Rust applications do tend to be larger than comparable C applications, but largely because of static vs. dynamic linking and larger debuginfo.
> helping the cause
Rust evangelism is probably the worst part of Rust. Shallow comments stating Rust’s superiority read to me like somebody who wants to tell me about Jesus.
it's not unique to Rust; C/C++ devs probably just aren't used to it, since there hasn't been anything major and new for decades.
If you already dislike this, I'd ask you to read the C evangelism around the recent drama about Rust in Linux.
Jesus wasn't written in Rust? Sounds like a recipe for UB if you ask me.
That's very funny, Jesus was pretty much undefined behavior personified from the perspective of the state/church.
Not to mention, modern CPUs have essentially been designed to make C code run as fast as possible.
Definitely not true. One look at what a modern C compiler does to optimize the code you give it will disabuse you of that notion.
There's nothing special or magic about C code, and, if anything, C has moved further and further away from its "portable assembler" moniker over time. And compilers can emit very similar machine instructions for the same type of algorithm regardless of whether you're writing C, Rust, Go, Zig, etc.
Consider, for example, that clang/LLVM doesn't even really compile C. The C is first translated into LLVM's IR, which is then used to emit machine instructions.
I haven't designed any CPUs myself, someone with more experience could give you more details.
But I don't think this carries much weight anymore, might have been true way back in the days.
C gives you more control, which means it's possible to go faster if you know exactly what you're doing.
> Rust is a much more complex language than C
Feature wise, yes. C forces you to keep a lot of irreducible complexity in your head.
> Rust has a much, much slower compiler than pretty much any language out there
True. But it doesn't matter much in my opinion. A decent PC should be able to grind through any Rust project in a few seconds.
> Rust applications are sometimes
Sometimes is a weasel word. C is sometimes slower than Java.
> Rust takes most people far longer to "feel" productive
C takes me more time to feel productive. I have to write code, then unit tests, then property tests, then run valgrind and check that UBSan is on. Then write more tests, and then do fuzz testing.
Or I can write the same stuff in Rust and just run tests. Run miri and a bigger test suite if I'm using unsafe. Maybe fuzz test.
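For what it's worth, the test harness that workflow leans on is built into the language; `cargo test` picks up the attributes with no extra tooling. A minimal sketch (the `checked_div` function is a made-up example, not from the thread):

```rust
// A tiny function plus its tests, all in one file. Running `cargo test`
// compiles and executes every #[test] function automatically.
fn checked_div(a: i32, b: i32) -> Option<i32> {
    if b == 0 { None } else { Some(a / b) }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn divides() {
        assert_eq!(checked_div(10, 2), Some(5));
    }

    #[test]
    fn rejects_zero() {
        assert_eq!(checked_div(1, 0), None);
    }
}
```

No Makefile targets, no external test framework to vendor in; that's a big part of the "just run tests" difference being described.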
> A decent PC should be able to grind through any Rust project in a few seconds.
That is demonstrably false, unless your definition of "decent PC" is something that costs $4000.
I love Rust, but saying misleading (at best) things about build times is not a way to evangelize.
Real projects get into the millions of lines of code; Rust will not scale to compile that quickly.
Not quickly, no. But neither does C++ (how long does it take to compile Clang?) and people manage fine.
Faster would obviously be better, but it's not big enough of a deal to cancel out all the advantages compared to C.
I remember a project that used boost for very few things, but it included a single boost header in almost every file. That one boost header absolutely inflated the build times to insane levels.
Good for you. Like the grandparent commenter said, for others these tradeoffs might be important. E.g.:
> I am disappointed with how poorly Rust's build scales, even with the incremental test-utf-8 benchmark which shouldn't be affected that much by adding unrelated files. (...)
> I decided to not port the rest of quick-lint-js to Rust. But... if build times improve significantly, I will change my mind!
https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
> Good for you.
> https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
Look, you're picking a memory-unsafe language versus a safe one. Whatever meager gains you save on compilation times (and the link shows the difference is meager if you aren't on macOS, which I'm not) will be obliterated by losses in figuring out which UB nasal demon was accidentally released.
This is like the argument that dynamic types save time because you can catch errors in tests. But then you have to write more tests to compensate, so you lose time overall.
> C takes me more time to feel productive. I have to write code, then unit tests, then property tests, then run valgrind and check that UBSan is on. Then write more tests, and then do fuzz testing.
So … make && make check ?
"How to install and use "make" in Windows?"
https://stackoverflow.com/questions/32127524/how-to-install-...
That's fair, but what really drags C and C++ down for me is the difficulty in building them. As I get older I just want to write the code and not mess with makefiles or CMake. I don't want starting a new project to be a "commitment" that requires me to sit down for two hours.
For me Rust isn't really competing against unchecked C. It's competing against Java and boy does the JVM suck outside of server deployments. C gets disqualified from the beginning, so what you're complaining about falls on deaf ears.
I'm personally suffering the consequences of "fast" C code every day. There are days where 30 minutes of my time are wasted waiting for antivirus software. Things that ought to take 2 seconds take 2 minutes. What's crazy is that in a world filled with C programs, you can't say with a good conscience that antivirus software is unnecessary.
> That's fair, but what really drags C and C++ down for me is the difficulty in building them. As I get older I just want to write the code and not mess with makefiles or CMake. I don't want starting a new project to be a "commitment" that requires me to sit down for two hours.
Also, integrating 3rd party code has always been one of the worst parts of writing a C or C++ program. This 3p library uses Autoconf/Automake, that one uses CMake, the other one just ships with a Visual Studio .sln file... I want to integrate them all into my own code base with one build system. That is going to be a few hours or days of sitting there and figuring out which .c and .h files need to be considered, where they are, what build flags and -Ddefines are needed, how the build configuration translates into the right build flags and so on.
In more modern languages, that whole drama is done with pip install or cargo add.
completely not!
(And yes, I was considering if I should shout in capslock ;) )
I have seen so many fresh starts in Rust that went great during week 1 and 2 and then they collided with the lifetime annotations and then things very quickly got very messy. Let's store a texture pointer created from an OpenGL context based on in-memory data into a HashMap...
impl<'tex, 'gl, 'data, 'key> GlyphCache<'tex, 'gl, 'data, 'key> {
Yay? And then your hashmap's .or_insert_with fails due to lifetime checks, so you need a match on the hashmap entry, and now you're doing the key search twice and performance is significantly worse than in C.
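The double-lookup fallback being described can be sketched like this (a hedged sketch; `load_glyph` and the key/value types are stand-ins, not from the original code). The entry API normally does a single lookup, but when the borrow checker rejects the closure passed to `.or_insert_with` (for example because building the value needs something that's already mutably borrowed), people fall back to a pattern like:

```rust
use std::collections::HashMap;

// Stand-in for texture creation; in the real scenario this might need a
// &mut OpenGL context, which is what trips up the or_insert_with closure.
fn load_glyph(id: u32) -> Vec<u8> {
    vec![id as u8]
}

fn get_or_create(cache: &mut HashMap<u32, Vec<u8>>, id: u32) -> &Vec<u8> {
    // First key search:
    if !cache.contains_key(&id) {
        let tex = load_glyph(id);
        cache.insert(id, tex); // second key search
    }
    cache.get(&id).unwrap() // third, in the worst case
}
```

When the closure borrows cleanly, `cache.entry(id).or_insert_with(|| load_glyph(id))` does all of this with one lookup, which is the performance gap the comment is pointing at.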
Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through this:
https://doc.rust-lang.org/cargo/reference/manifest.html
to get a valid Cargo.toml. And make sure you don't name it cargo.toml, or stuff will randomly break.
> Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through [link to cargo docs]
This is just bizarre to me, the claim that dependency management is easier in C projects than in Rust. It is incredibly rare that adding a dependency to a C project is just an #include and -l flag away. What decent-sized project doesn't use autotools or cmake or meson or whatever? Adding a dependency to any of those build systems is more work than adding a single, short line to Cargo.toml.
And even if you are just using a hand-crafted makefile (no thank you, for any kind of non-trivial, cross-platform project), how do you know that dependency is present on the system? You're basically just ignoring that problem and forcing your users to deal with it.
You don’t need to work through that, you can follow https://doc.rust-lang.org/cargo/reference/build-script-examp... and it shows you how.
Adding `foo = "*"` to Cargo.toml is as easy as adding `-l foo` to a Makefile.
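For comparison, here is what a minimal manifest looks like (`serde` is just an illustrative crate; pinning a real version is better practice than `"*"`, which accepts any release):

```toml
[package]
name = "demo"
version = "0.1.0"
edition = "2021"

[dependencies]
# One line per dependency; cargo fetches and builds it automatically.
serde = "1.0"
```

That one line under `[dependencies]` is the whole "work through the manifest docs" step for the common case.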
The best feature of C is the inconvenience of managing dependencies. This encourages a healthy mistrust of third-party code. Rust is unfortunately bundled with an excellent package manager, so it's already well on its way to NPM-style dependency hell.
Can't help but agree, as much as I prefer Rust over C.
On the other hand, C definitely goes too far in the opposite direction. I am very tired of reinventing wheels in C because integrating third-party dependencies is even more annoying than writing and maintaining my own versions of common routines.
It's also very mature, not so much of a moving target.
Both aspects are something I think many developers grow to appreciate eventually.
Rust has three major issues:
- compile times
- compile times
- compile times
Not a problem for small utilities, but once you start pulling dependencies... pain is felt.
Long compile times with Rust don't really bother me that much. If it's someone else's program that I just want to build and run for myself, the one-time hit of building it isn't a big deal. I can be patient.
If it's something I'm actively developing, the compile is incremental, so it doesn't take that long.
What does often take longer than I'd like is linking. I need to look into those tricks where you build all the infrequently-changing bits (like third-party dependent crates) into a shared library, and then linking is very quick. For debug builds, this could speed up my development cycle quite a bit.
Long compile times aren't a new issue for languages with advanced features. Before Rust, it was Haskell. And before Haskell, it was C++.
And implementation-wise, LLVM probably has something to do with it.
It is when the root cause is tooling, not language features.
You don't need to wait for long compile times in Haskell if you don't want to, there are interpreters and REPLs available as well.
You don't need to wait for long compile times in C++ if you don't want to, most folks use binary libraries, not every project is compiled from scratch, there are incremental compilers and linkers, REPLs like ROOT, managed versions with JIT like C++/CLI, and if using modern tooling like Visual C++ or Live++, hot code reloading.
Compile time is also my top three major issues with C++, in a list that also includes memory safety.
Compared to C I'd say the biggest issue is complexity, of which compile time is a consequence.
The C standard makes provisions for compiler implementers that absolve them of responsibility for the complexity of the C language. Since most people never actually learn all the undefined behavior specified in the standard, and compilers let it pass, the language might seem simpler, but it's really only the compilers that are simpler.
You can argue that Rust generics are a trivial example of increased complexity vs. the C language, and I'd kinda agree, except that a language without them, but with all of C's undefined behavior defined, would be cumbersome to use. Complexity can't disappear; it can only be moved around.
True, if C wanted to be Rust it would be just as complicated.
But who cares?
The fact that C chooses not to nail everything down makes it a simpler and more flexible language, which is why it's sometimes preferred.
Undefined behavior does not make things simpler.
I like the Rust ADTs and the borrow checker, but I can't stand the syntax. I just wish it had Lisp syntax, but making it myself is far beyond my abilities.
Except for the complexity of the language
At least apparent complexity. See "Expert C Programming: Deep C Secrets": C's complexity creeps up on you shockingly fast, because the language pretends to be simple by leaving things undefined, while in real life things need some kind of behavior.
IMO these are the major downsides of Rust in descending order of importance:
- Project leadership being at the whims of the moderators
- Language complexity
- Openly embracing 3rd party libraries and ecosystems for pretty much anything
- Having to rely on esoteric design choices to wrestle the compiler into using specific optimizations
- The community embracing absurd design complexity, like implementing features via extension traits in code sections separated both from where the feature is used and from where the structs and traits are implemented
- A community of zealots
I think the upsides easily outcompete the downsides, but I really wish it'd resolve some of these issues...
I'll take good complexity over bad simplicity any day.
You can ignore most of the complexity that's not inherent to the program you're trying to write.
The difference is C also lets you ignore the inherent complexity, and that's where bugs and vulnerabilities come from.
Rust makes explicit what the C standard says you can't ignore but it's up to you and not the compiler. Rust is a simpler and easier language than C in this sense.
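A small, made-up illustration of that explicitness: in C, forgetting a NULL check still compiles and fails at runtime; in Rust, the absent case is part of the type, and the compiler rejects code that ignores it.

```rust
// Hypothetical lookup; the names here are invented for illustration.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn greeting(id: u32) -> String {
    match find_user(id) {
        Some(name) => format!("hello, {name}"),
        // Deleting this arm is a compile error, not a segfault later.
        None => String::from("no such user"),
    }
}
```

The check the C standard leaves "up to you" is exactly the arm the Rust compiler refuses to let you omit.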
I would use Mojo - you get the type and memory safety of Rust, the simplicity of Python and the performance of C/C++.
> simplicity of Python
Python isn’t simple, it’s a very complex language. And Mojo aims to be a superset of Python - if it’s simple, that’s only because it’s incomplete.
Not even close to true, may I ask how much experience you have with C (not C++)?
That really depends what you want to do. All that security in Rust is only needed if there is a danger of hacks compromising the system.
The moment you start building something that's not exposed to the internet, where hacking it has no implications, C beats it due to simplicity and speed of development.
> All that security in Rust is only needed if there is a danger of hacks compromising the system.
It's not just about security, it's about reliability too. If my program crashes because of a use-after-free or null pointer dereference, I'm going to be pissed off even if there aren't security implications.
I prefer Rust to C for all sorts of projects, even those that will never sit in front of a network.
C might beat Rust at simplicity and speed of development (I don't know, I never developed in Rust), but I remember why I stopped developing in C about 30 years ago: the hundreds of inevitably bug-ridden lines of C it took to build a CGI back then (malloc, free, strcpy, etc.) vs. little more than string slicing and "string" . "concatenation" in Perl, and forget about everything else. That could have been Python (which I didn't know about), or the languages that were born in those years: Ruby and PHP. Even Java was simpler to write. Runtime speed was seldom a problem even in the 90s. C programs are fast to run, but they are not fast to develop.
It also depends on what you want to get away from.
I don't disagree that Rust might technically be a better option for a new project, but it's still a fairly fast moving language with an ecosystem that hasn't completely settled down. Many are increasingly turned off by fast-changing developer environments and ecosystems, and C provides you with a language and libraries that have already been around for decades and aren't likely to change much.
There are also so many programming concepts and ideas in Rust, which are all fine and useful in their own right, but they are a distraction if you don't need them. Some might say that you could just not use them, but they sneak up on you in third party libraries, code snippets, examples and suggestions from others.
Personally I find C a more cosy language, which is great for just enjoying programming for a bit.
Correctness is not just about security. And the threat environment to which a program may eventually be exposed is not always obvious up front.
Also, no: that's only true for some kinds of programs. Rust, c++, and go all have a much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
The only place I find C still useful over one of the other three is embedded, mostly because of the ecosystem, and rust is catching up there also.
(This is somewhat ironic, because I teach a class in C. It remains a useful language when you want someone to quickly see the relationship between the line of code they wrote and the resulting assembly, but it's also fraught - undefined behavior lurks in many places and adds a lot of pain. I will one day switch the class to rust, but I inherited the C version and it takes a while.)
> much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
So many people have implemented those data structures, though, and they are available freely and openly; you can choose to your liking, e.g. ohash, uthash, or khash, and that is only for hash tables.
Those complex libraries are out there, too, for C, obviously.
The reason for why it is not in the standard library is obvious enough: there are many ways to implement those data structures, and there is no one size that fits all.
There are! But composability is easier in the languages that have generics/templates/etc. There's less passing around of function pointers and writing of custom comparator functions, using something like binary search or sort as an example, and the fact that those comparators can be inlined can often make the rust or C++ version faster than the "as simple to write" C version.
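The inlining point can be sketched in Rust (a toy example with made-up data): the closure below is a concrete, monomorphized type, so the comparison can be inlined into the sort, whereas C's `qsort` calls the comparator through a function pointer on every comparison.

```rust
// Sort strings by length; the closure is statically known to the
// compiler, so there's no indirect call per comparison.
fn sort_by_len(mut words: Vec<&str>) -> Vec<&str> {
    words.sort_by_key(|w| w.len());
    words
}
```

This is why the Rust or C++ version of a generic sort can end up faster than the equally-simple-to-write C one, without the author doing anything special.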
Obviously, all of these languages are capable of doing anything the others can. Turing complete is turing complete. But compare the experience of writing a multithreaded program that has, as part of it, an embedded HTTP server that provides statistics as it runs. It's painful in C, fairly annoying in C++ unless you match well to some existing framework, pretty straightforward in Rust, and kinda trivial in Go.
When it comes to multithreaded programs, I much prefer Go over C, too. :)
I followed the discussion about Rust in Linux.
One comment talked about sticking with an AVL tree in C instead of a (faster) B-tree, because of the complexity (and thus maintenance burden and risk of mistakes) it would add to the code.
They were happy to use a B-tree in Rust, though.
> All that security in Rust is only needed if there is a danger of hacks compromising the system.
Rust's safety features help prevent a large class of bugs. Security issues are only one kind of bug.
> C beats it due to simplicity and speed of development
C being faster to develop than Rust is a ludicrous claim.
Not really. Rustup only ships a limited number of toolchains, with some misses that (for me) are real head-scratchers. i686-unknown-none, for example. Can't get it from rustup. I'm sure there's a way to roll your own toolchain, but Rust's docs might as well tell you to piss up a rope for how much they talk about that.
Why is this important? C is the lingua franca of digital infrastructure. Whether that's due to merit or inertia is left as an exercise for the reader. I sure hope your new project isn't meant to supplant that legacy infrastructure, 'cause if it needs to run on legacy hardware, Rust won't work.
This is an incredibly annoying constraint when you're starting a new project, and Rust won't let you because you can't target the platform you need to target. For example, I spent hours building a Rust async runtime for Zephyr, only to discover it can't run on half the platforms Zephyr supports because Rust doesn't ship support for those platforms.
Going from mid-90s assembly to full stack dev/sec/ops, getting back to just a simple Borland editor with C or assembly code sounds like a lovely dream.
Your brain works a certain way, but you're forced to evolve into the nightmare of half-done complex stacks we run these days, and it's just not the same job any more.
I like Nim- compiles to C so you get similarly close to the instructions and you can use a lot of high level features if you want to, but you can also stay close to the metal.
The author's github profile: https://github.com/thodg
The way he writes about his work in this article, I think he's a true master. Very impressive to see people with such passion and skill.
> Defensive programming all the way : all bugs are reduced to zero right from the start
Has it been fuzzed? Have you had someone who is very good at finding bugs in C code look at it carefully? It is understandable if the answer to one or both is "no". But we should be careful about the claims we make about code.
At this point one should choose a C-like subset of Rust, if they have this particular urge. A lot fewer rakes under the leaves.
I've read through your website and thinking processes.
Your work is genius! I hope KC3 can be adopted widely, there is great potential.
504 Gateway Timeout
Archived at https://archive.is/zIZ8S
Maybe the moral here is learning Lisp made him a better C programmer.
Could he have jumped right into C and had amazing results, if not for the journey of learning Lisp changing how he thought about programming?
Maybe learning Lisp is how to learn to program. Then other languages become better by virtue of how someone structures the logic.
The point is that much of the defensive programming you would have to do in C is unnecessary and automatic in Rust.
There's much more to defensive programming than avoiding double frees and overflows.
Yeah and Rust enables much more defensive programming than just avoiding double frees and overflows.
much != all
[flagged]