'That *******' has earnt any 'superior smirk' you may perceive!
I dunno -- Knuth, who deserves far more accolades IMHO (let's face it, C++ was nothing new or particularly inventive at the time), seems to be able to write about his work very well without coming across that way. I think Stroustrup is just particularly full of himself.
Books about languages (and libraries) are for programmers and should be written by programmers -- even better if it's the actual designer (once the reader is past beginner level, anyway). His D&E (http://www2.research.att.com/~bs/dne.html) book is far more interesting than the reference manual, and was a great help to me in trying to master the language.
Who cares about pointers anyway? Hardware is cheaper than software, so I say use a modern app framework with garbage collection and scale up or scale out if needed. You might even ship on time with fewer bugs. Fuggedaboutit!
Not when you're programming a machine controller running on a real-time OS -- you need stable, predictable response times.
Plus I'm somewhat dubious about the benefits of GC. Pointers and manual memory management aren't that hard -- if you can't understand them, you probably shouldn't be in this line of work. I've spent so much time tracking down memory leaks in a (garbage-collected) Adobe Flex app (yikes), and they're rather insidious things. Not freeing a block of memory in C/C++ because you forgot to call free/delete is a relatively easy thing to find. But having to use a profiler to track down where in a bunch of black-box code a reference is being retained is far more of a hassle, and it seems to happen almost as often.
So you both think there are better alternatives to C++ ... fine ... use them then! For many applications they might be more suitable, but the fact remains that C++ does what it was supposed to do, which wasn't to replace Smalltalk or anything else; rather, it deliberately remains close to the hardware. That happens to suit what I do, pointers and all.
Problem is, it's so popular, everybody uses it, commercial libraries are written in it, the entire industry has standardised on it. Which means as a programmer, one inevitably has to deal with it. So it pisses me off because it could've been so much better. C++0x is getting there but it's not enough.
I worked on a huge, federally regulated commercial C++ product for 3 years, and have been working only in .NET for the past 3 years. I understand memory management... it's just not important to me. Memory leaks are not avoidable in a large C++ app, no matter how good a coder you are. I never get them in .NET. And the amount of features my team adds in a week is more than my old team would add in 3 months. Far fewer bugs and better quality too.
To give a Ripster-ish analogy, .NET, Java, Rails, etc. are to C++ what USB is to PS/2. =)
I studied CS, didn't necessarily want to end up code monkeying with C++ every day. It serves its purpose, but it's by no means beautiful or elegant, and when you look at what else was available at the time (Jesus man, Smalltalk?), he could've done much better. I love the benefits that come with object-oriented programming, but something very important and beautiful was lost when we left C.
There was already Objective-C -- a C with some Smalltalk thrown in ...
However, horse**** on the `Memory leaks are not avoidable in a large C++ app'. All it takes is discipline. Throwing a few tools at it like valgrind to check you've crossed your t's and dotted your i's doesn't hurt either.
If you have enough discipline, you could perhaps find all memory leaks and plug them, but it can sometimes be hard to do ... or even to see that you have a leak in your program.
Imagine you've got microsecond timing requirements and the .NET virtual machine decides to collect the garbage right in the middle of an execution loop. Game over!
You cannot have microsecond timing requirements on Windows. I have tried that. It is not a real-time OS. Forget about it.
There was already Objective-C -- a C with some Smalltalk thrown in ...
Yes, there's that :) By the same reasoning, we wouldn't have high-performance four-stroke motorcycle engines if it weren't for Soichiro Honda being pissed off with two-strokes.
Many languages have evolved as a reaction to the shortcomings of C++. One of them is Java. It would not be what it is if it hadn't been for C++.
If you have enough discipline, you could perhaps find all memory leaks and plug them, but it can sometimes be hard to do ... or even to see that you have a leak in your program.
The easiest thing to do is not to create them in the first place -- sticking to rules like RAII helps here.
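For anyone who hasn't seen it, here's roughly what that looks like -- a minimal sketch in plain standard C++ (the Packet type is invented for the example):

    #include <memory>
    #include <vector>

    // Hypothetical payload type, purely for illustration.
    struct Packet { std::vector<char> bytes; };

    void process() {
        // Ownership is tied to scope: whether we return normally or
        // via an exception, the Packet is freed when 'p' goes away.
        auto p = std::make_unique<Packet>();
        p->bytes.resize(1024);
        // ... use *p ...
    }   // destructor runs here -- no delete, no leak

    int main() { process(); }

There's simply no code path on which the delete can be forgotten, which is the whole point.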
Another aspect of manual memory management vs. GC is that in many cases you don't just manage memory objects, but many other types of resources as well: most often operating-system objects of which you can have at most N at once. If you manage resources manually, then you can often manage both memory and other resources the same way, but with GC you can't -- you still need to manage some resources manually. (Sticking the deallocator in finalize() won't work, because you cannot tell when it will be called.)
Quite -- I've found GC helps with the easy stuff, but often you still need to have a pretty good understanding of what's going on underneath. And having resource locking and management further obfuscated and hidden definitely doesn't help.
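And the nice thing is the same trick covers the non-memory case. A rough sketch, assuming POSIX -- the Fd class itself is invented for illustration, but open()/close() are the real calls:

    #include <fcntl.h>     // open
    #include <unistd.h>    // close
    #include <stdexcept>

    // Illustrative RAII wrapper: the descriptor is released
    // deterministically when the object goes out of scope, unlike a
    // GC finalizer, which runs at some unspecified later time (if at all).
    class Fd {
        int fd_;
    public:
        explicit Fd(const char* path) : fd_(::open(path, O_RDONLY)) {
            if (fd_ < 0) throw std::runtime_error("open failed");
        }
        ~Fd() { if (fd_ >= 0) ::close(fd_); }
        Fd(const Fd&) = delete;             // no accidental double-close
        Fd& operator=(const Fd&) = delete;
        int get() const { return fd_; }
    };

Memory, file descriptors, sockets, mutexes -- one idiom for all of them, with a deterministic release point.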
Valgrind works only on a low level and will only detect leaks of memory. I have worked with code that lacked a pthread_destroy_xx() call. The bug did not incur any memory leak, and there were no resource leaks under Linux, but resource leaks were (eventually) found when the program was ported to Windows, because the pthread implementation is different there.
I get nervous when people say things like `oh well, it's POSIX, should be able to port it in a week or two' for those kinds of reasons :) You're right. Our fancy tools don't always save us -- that goes for both valgrind and GC :P
You cannot have microsecond timing requirements on Windows. I have tried that. It is not a real-time OS. Forget about it.
Yes, I've heard Windows isn't too flash for that. We use RT Linux for that reason. 50 kHz square wave out of a GPIO pin into a scope -- rock solid, no jitter, even when loading the thing up with all kinds of CPU- and I/O-bound other processes. That's only 20 µs of course, but it's good enough.
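For the curious, the skeleton of that kind of loop is roughly this -- the GPIO write is stubbed out, since the real thing is board-specific:

    #include <time.h>   // clock_gettime, clock_nanosleep (POSIX)

    // Stub for the board-specific part: real code would poke a GPIO
    // register or a /sys/class/gpio value file here.
    static void gpio_set(bool /*level*/) {}

    int main() {
        // 10 us high + 10 us low = 50 kHz. On a real box you'd also
        // want SCHED_FIFO and mlockall(); omitted here for brevity.
        const long half_period_ns = 10000;
        timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        bool level = false;
        for (;;) {
            next.tv_nsec += half_period_ns;
            if (next.tv_nsec >= 1000000000L) {  // carry into seconds
                next.tv_nsec -= 1000000000L;
                ++next.tv_sec;
            }
            // Sleep to an absolute deadline: loop overhead can't
            // accumulate into drift the way a relative sleep would.
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            gpio_set(level = !level);
        }
    }

The absolute-deadline sleep is what keeps the output drift-free; the RT kernel is what keeps the wakeup jitter down.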
For embedded systems, there are GC algorithms in the literature that are deterministic enough for hard real-time apps, but I don't know if they are available anywhere, or used for anything. They do carry a lot of overhead, though.
Would be more interesting if more programmers had boobs.
I know programmers with boobs.
You cannot have microsecond timing requirements on Windows. I have tried that. It is not a real-time OS. Forget about it.
So? Should people writing programs not intended to run under Windows have to use assembly language?
It's true enough that most Windows applications could be written in languages with garbage collection and other nice features. For the typical user, there's only one kind of program that actually takes the hardware's raw performance to its limit.
We are not talking about raw performance. Realtime is about latency: being able to respond to input in a timely manner.
Programs like Duke Nukem.
Wibox, I think you forgot one of the best quotes about C++:
I made up the term 'object-oriented', and I can tell you I didn't have C++ in mind
-- Alan Kay, OOPSLA '97
That quote is in there as well.
That being said, this thread is very entertaining even though I don't really know much about coding.
Problem is, it's so popular, everybody uses it, commercial libraries are written in it, the entire industry has standardised on it. Which means as a programmer, one inevitably has to deal with it. So it pisses me off because it could've been so much better. C++0x is getting there but it's not enough.
Firstly, the `you' in my post wasn't you per se -- it was a general `one'. I'm sure you can manage memory.
However, horse**** on the `Memory leaks are not avoidable in a large C++ app'. All it takes is discipline. Throwing a few tools at it like valgrind to check you've crossed your t's and dotted your i's doesn't hurt either.
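If you've never seen it, this is all it takes -- a deliberately leaky toy program and one valgrind run (the file name is made up; the commands are real):

    // leak.cpp -- a deliberately leaky toy program.
    // Build with debug info and run it under valgrind:
    //   g++ -g leak.cpp -o leak
    //   valgrind --leak-check=full ./leak
    // valgrind flags the never-freed allocation as "definitely lost"
    // and prints the stack trace of the offending new[].
    int main() {
        int* scratch = new int[100];  // oops -- no matching delete[]
        scratch[0] = 42;
        return 0;
    }

It won't catch non-memory resources, mind you, but for plain heap leaks it points straight at the allocation site.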
.NET has its place. But there are some things it will never replace. Controlling real-time machinery is one of them. Sure, Beckhoff and the like are bringing out tools to deal with .NET, but that's only one piece of the pie, and eventually you're twiddling bits on a bus at a hardware level. To do that reliably you need C/C++.
Imagine you've got microsecond timing requirements and the .NET virtual machine decides to collect the garbage right in the middle of an execution loop. Game over!
And there's one fact that makes this all moot anyway -- of the new projects started today that should be written in .NET/Java/Ruby (Rails? eww :P), most (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) get written in C/C++ anyway because of momentum and general stubbornness (note that that link is of total lines of code -- I saw another paper a year or two ago which reckoned that C, of all things, topped the list of new projects by far, but I can't find it). So unfortunately, while I'd certainly rather be doing GUI apps in .NET (if I used Windows -- most of my work is on embedded Linux), chances are in most companies I'll end up using C/C++ because the architect said so.
The real question is what keyboard does IBM's Watson use? THAT is the future of programming.
http://farm6.static.flickr.com/5179/5578994713_354722817b_z.jpg