I understand Boeing is making a "software patch" to fix the problem.
Anytime I have to rely on computer code to save my hide, I will politely refuse.
That ship has long since sailed; you are relying on computer code, right now. Nearly every new airplane (and a lot of older ones) is fly-by-wire or otherwise dependent on software for its basic function. You worked on a program upon which human civilization currently depends; it is fly-by-wire.
Software adds complexity, but is absolutely necessary for many functions that couldn't practically be done otherwise. The mere fact that we are having a discussion about it, here, is evidence of that.
The problem is that it permits systems of such byzantine complexity that when something unexpected happens, the people troubleshooting have to be far more expert than they would otherwise need to be. You might recall what happened when one individual shifted one word, one bit, to the left, when trying to set a clock; it confounded one of the smartest people I have ever known for several hours.
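To illustrate the kind of bug in that anecdote (I don't know the actual clock hardware, so the register layout below is invented), here is a sketch of how a single off-by-one shift corrupts a packed value into something that still decodes to a plausible-looking, but wrong, answer:

```python
# Hypothetical example: packing hours and minutes into one clock register.
# The field offsets are invented for illustration only.

def pack_time(hours, minutes):
    """Correct packing: minutes in bits 0-5, hours starting at bit 6."""
    return (hours << 6) | minutes

def pack_time_buggy(hours, minutes):
    """Off-by-one shift: hours land one bit too far to the left."""
    return (hours << 7) | minutes

h, m = 14, 30
good = pack_time(h, m)        # 14*64 + 30 = 926
bad = pack_time_buggy(h, m)   # 14*128 + 30 = 1822

# Unpacking the buggy value with the correct layout yields a time
# that looks like data, not like a shift bug:
wrong_h, wrong_m = bad >> 6, bad & 0x3F
print(wrong_h, wrong_m)  # 28 30 -- "28:30", nonsense, but not obviously why
```

Nothing crashes and no error is raised; the system simply behaves wrongly, which is exactly why this class of bug can eat hours of a very smart person's time.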
Howard's earlier comment is where the debate lies. "If a pilot did something wrong, we tried to figure out how to prevent that action" - that sort of reasoning is *why* these things tend to snowball, because if you are not exceedingly careful about what you are "protecting" against, you create the sort of issue seen here. Enough of this, and you require the pilots or other operators to understand how to out-think an exceedingly complex system, in real time, while headed towards the ground with 100 people on board. Of all people, *Trump* - not notable for his engineering expertise - put his finger on it the first day, saying he didn't want his pilots to have to be Einstein to figure it out.
A lot of the other comments (here and on SSW) along the lines of "well, if it looks like there is a problem, disable the system (by turning off the jackscrew motor, presumably at the breaker), we all knew about it" are overly simplistic. The MCAS system is just one of a *vast array* of possible issues that could cause controllability problems. You can't "train" people to avoid this sort of problem in the usual sense of the word; it does not necessarily yield to procedures and checklists. You frequently have to know, in extreme detail, exactly how *all* of the system features work, and how to perform troubleshooting on these potentially very complex systems.
I have done a lot of things in the aerospace business, and can do any type of task required of someone in my field from data entry to complex non-linear analysis. But I have made my reputation by being one of the guys who can unwind these extremely complex systems from minimal data.
I am always the one calling for *simplification* to the maximum extent possible, because it's almost absurdly easy to put the troubleshooting out of the range of any but the most accomplished experts. And I assure you, this appears to be an exceedingly rare skill that cannot practically be educated or trained into someone, no matter what you do. No one wants to pay these guys what you would have to, to hang around for endless hours for the relatively rare occasions they are required, and the experts don't want to sit in a jumpseat on an Indonesian 737 commuter flight for endless hours waiting for something to happen.
Because of that, engineers should all be *very hesitant* to design something that requires special processing to overcome a more fundamental issue, and every such airplane should be safely controllable by hitting a "turn off all enhancements" switch - a big red switch on the control yoke - and flying the airplane.
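A minimal sketch of that "big red switch" principle (every name and gain below is invented for illustration, not any real flight control law): route every automatic augmentation through one gate the pilot can cut, so the fallback is pure direct control with no special cases.

```python
# Hypothetical sketch: all augmentations feed through a single
# enable flag. One switch, one effect: direct control.

class FlightControls:
    def __init__(self):
        self.enhancements_enabled = True
        # Invented augmentation functions; each returns a correction
        # added to the pilot's commanded elevator deflection.
        self.augmentations = [self._trim_assist, self._gust_damping]

    def _trim_assist(self, state):
        return -0.1 * state["pitch_rate"]

    def _gust_damping(self, state):
        return -0.05 * state["vertical_accel"]

    def big_red_switch(self):
        """All augmentation off; no partial modes, no hidden residue."""
        self.enhancements_enabled = False

    def elevator_command(self, pilot_input, state):
        cmd = pilot_input
        if self.enhancements_enabled:
            for aug in self.augmentations:
                cmd += aug(state)
        return cmd

fc = FlightControls()
state = {"pitch_rate": 2.0, "vertical_accel": 1.0}
print(fc.elevator_command(1.0, state))  # augmented command
fc.big_red_switch()
print(fc.elevator_command(1.0, state))  # exactly the pilot's input
```

The design point is that the off path is trivially simple: after the switch, the command is the pilot's input, full stop, so the pilot doesn't have to out-think the automation to know what the airplane will do.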
I don't actually know what the deal is here or whether the speculation to date is correct or not. But everyone designing these things should realize the essential nature of how it will be used and who will be using it.
Brett