There are a number of common misconceptions in software development surrounding the idea of concurrency. These have been around for decades, and some of them have just been reinforced once more in an otherwise interesting post on LinkedIn’s engineering blog recommending their development framework.
Such issues may be observed throughout the post, but can be elucidated via this short paragraph:
As we saw with the Scala and JavaScript examples above, for very simple cases, the Evented (asynchronous) code is generally more complicated than Threaded (synchronous) code. However, in most real-world scenarios, you’ll have to make several I/O calls, and to make them fast, you’ll need to do them in parallel.
At a glance, this may look like a sane proposition. There’s agreement that an asynchronous API or framework is one that does not block the flow of execution when faced with a task that has a long or unpredictable deadline, and that this coding style is harder for human beings to get right. For example, if you see code such as:
data = read(filename)
There’s less brain work needed to process and build upon it than with so-called asynchronous logic such as:
read(filename, callback)
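To make the contrast concrete, here is a sketch in Go of two dependent reads in each style. The readFileAsync helper and the file names are hypothetical, invented purely for illustration:

    package main

    import (
        "fmt"
        "os"
    )

    // readFileAsync is a hypothetical callback-based helper,
    // defined here only to demonstrate the evented style.
    func readFileAsync(name string, cb func([]byte, error)) {
        go func() {
            data, err := os.ReadFile(name)
            cb(data, err)
        }()
    }

    func main() {
        // Synchronous style: two dependent reads run top to
        // bottom, with errors handled where they occur.
        name, err := os.ReadFile("pointer.txt")
        if err != nil {
            fmt.Println(err)
            return
        }
        data, err := os.ReadFile(string(name))
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("sync read", len(data), "bytes")

        // Evented style: the same two reads now nest, and the
        // error handling is scattered across callbacks.
        done := make(chan struct{})
        readFileAsync("pointer.txt", func(name []byte, err error) {
            if err != nil {
                fmt.Println(err)
                close(done)
                return
            }
            readFileAsync(string(name), func(data []byte, err error) {
                if err != nil {
                    fmt.Println(err)
                } else {
                    fmt.Println("async read", len(data), "bytes")
                }
                close(done)
            })
        })
        <-done
    }

The logic is identical, but the second version forces the reader to track control flow across nested closures, which is exactly the extra brain work mentioned above.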
It’s also true that there are important interfaces that follow the asynchronous style to prevent resource waste, some of which exist in the kernel’s I/O APIs.
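For instance, on Linux the kernel exposes epoll (kqueue plays the same role on the BSDs): rather than blocking on a single file descriptor, the program asks the kernel which of many descriptors are ready. A minimal, Linux-only sketch via Go’s syscall package, watching standard input:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Linux-only: create an epoll instance.
        epfd, err := syscall.EpollCreate1(0)
        if err != nil {
            panic(err)
        }
        defer syscall.Close(epfd)

        // Watch stdin (fd 0) for readability.
        event := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: 0}
        if err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, 0, &event); err != nil {
            panic(err)
        }

        // A single wait call can report readiness for thousands
        // of descriptors, which is what makes this style cheap.
        events := make([]syscall.EpollEvent, 8)
        n, err := syscall.EpollWait(epfd, events, -1)
        if err != nil {
            panic(err)
        }
        fmt.Println(n, "descriptor(s) ready")
    }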
So what’s the issue, then?
There are a few. The first one is the claim that to make I/O scale you have to do it in parallel. That’s clearly not true. Scalable I/O requires your program to not waste an unreasonable amount of memory and CPU per operation. This may be achieved with simple concurrent techniques, and concurrency is not parallelism.
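The difference is easy to demonstrate. In the Go sketch below the runtime is pinned to a single processor, so nothing ever runs in parallel, and yet a hundred concurrent waits (time.Sleep standing in for real I/O calls) overlap and complete in roughly the time of one:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    func main() {
        // One processor: goroutines are concurrent, never parallel.
        runtime.GOMAXPROCS(1)

        start := time.Now()
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                time.Sleep(100 * time.Millisecond) // stand-in for an I/O call
            }()
        }
        wg.Wait()

        // Prints roughly 100ms, not 10 seconds.
        fmt.Println("elapsed:", time.Since(start))
    }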
This leads to the next point, which is the strong association between synchronous programming and threads. You can have synchronous programming, and its simplified mental model, without operating system threads. This can be done by having a compiler and runtime that are mindful of performance and resource consumption, building on the efficient kernel interfaces to implement their abstractions.
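Go’s goroutines are one concrete instance of this approach. In the sketch below (the address is arbitrary), every connection is handled in plain blocking style, while the runtime parks waiting goroutines on the kernel’s readiness interfaces and multiplexes them over a small number of operating system threads:

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", "localhost:8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            // One goroutine per connection; a read that waits
            // parks only the goroutine, not an OS thread.
            go func(c net.Conn) {
                defer c.Close()
                io.Copy(c, c) // synchronous-looking echo loop
            }(conn)
        }
    }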
These ideas have also been covered in a paper from 2003, “Why Events Are a Bad Idea (for high-concurrency servers)” by von Behren, Condit, and Brewer, including benchmark results that debunk the performance myth. What seems most interesting about this paper is that it envisions a compiler and runtime that would allow “overcom[ing] limitations in current threads packages and improv[ing] safety, programmer productivity, and performance”, by using techniques such as dynamic stack growth, stack moving, cheaper synchronization, and compile-time data race detection.
That exact mix, including all of the properties described in the paper, is available today in the Go language. You can have synchronous programming, concurrency, parallelism, and performance. We live in the future.
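As one small illustration, goroutine stacks start at a few kilobytes and the runtime grows and relocates them on demand, which is the dynamic stack growth and stack moving the paper calls for. A sketch (the frame count and padding are arbitrary) that recurses through a million frames without overflowing:

    package main

    import "fmt"

    // Each frame consumes some stack; a fixed-size thread stack
    // would overflow long before a million frames, but goroutine
    // stacks are grown and moved by the runtime as needed.
    func depth(n int) int {
        var pad [128]byte
        pad[0] = byte(n) // keep the padding live on the stack
        if n == 0 {
            return int(pad[0])
        }
        return 1 + depth(n-1)
    }

    func main() {
        fmt.Println(depth(1 << 20))
    }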