For Better or For Worse
There's a meme developing in the greater programming community about the "objective quality" of the design of Go. I most recently encountered it in Honza's self-described rant on language choice, where it was well put:
Mind you, the language is objectively poorly designed. [...] And yet, Go is a lot more popular than Haskell according to GitHub. Yet, there are so many amazing projects written in Go, like Docker, Influxdb, etcd, consul, prometheus, packer, and many more.
I think this is an interesting set of positions to hold simultaneously, and the author is not alone in holding them.
You couldn't assert Go was poorly designed with such certainty, and without any supporting evidence, if this opinion weren't popular. The groundwork has been laid over the past year by plenty of blog posts and articles, many from people who don't like Go because it doesn't make them feel clever or because it doesn't do anything new. They feel that Go, by intentionally ignoring certain advances in language design, lives in a past that better languages have moved on from, and they'll say so quite forcefully.
When people with this mindset try to explain Go's popularity, they inevitably come upon a paradox. If Go is bad, why is it so popular?
The cognitive dissonance is particularly difficult to contend with this time around. Unlike other derided but popular languages, Go was not forced upon people. It did not gain a critical mass despite its failings on a new platform being homesteaded primarily by amateur and hobbyist developers. It was developed at the largest internet company on earth by some of the most accomplished programmers in history, and saw its first uptick in adoption in distributed infrastructure projects.
To overcome this paradox, people will generally either conclude that everyone is stupid (which many people in the programming community unfortunately seem quite willing to do), or that there are other factors driving the adoption of Go. That it might actually be good, if judged by a different set of requirements, is not considered.
worse is better
This is a modern manifestation of what Richard P. Gabriel explored in his classic essay "worse is better". In it, Gabriel establishes a set of four values in system design that are in conflict: simplicity, correctness, consistency and completeness. He then posits two competing philosophies: the MIT approach, which values correctness and consistency most, and the New Jersey approach, which values simplicity of implementation above all.
Even if the New Jersey approach were a post facto straw-man description of the Unix philosophy or that of its creators, make no mistake that there has been an explicit endorsement of simplicity as a guiding philosophy from Go's creators. Simplicity of implementation is a stated concern in both the language and the libraries, and it takes explicit precedence over consistency in the language's design.
Simplicity has arguably become an even more widely accepted primary goal since Gabriel's 1990 essay, as computing has changed tremendously.
To cope with the growing amount of data, computation and storage have scaled out horizontally, and the prevailing approaches to this problem have trended towards the simple. Systems touted in the older literature, like CORBA or the ironically named SOAP, were eventually buried by successors whose failure modes were less elegant but whose implementations were orders of magnitude simpler.
The way software is developed has changed as well. A great many companies are built in no small part on open source software, and contribute back to it in turn. While simple code is easier to write, read, debug, and maintain, simple languages have the advantage of being easier to learn, which leads to more traction, more libraries and more contributors, increasing short term viability and long term quality.
All this isn't to say that simplicity necessarily implies a philistine refusal to incorporate advances made in the academic sphere. Complex things like Paxos just eventually lose favor to simpler things like Raft.
the nature of quality
Despite the trend towards simplicity, discussion of programming languages happens primarily within the framing of the MIT approach. This framing has been so successful and pervasive that it's no longer remarked upon, or even obvious.
Complex type systems that can formally be shown to make certain classes of bugs impossible to express in valid programs are praised regardless of their impact on the difficulty of implementation or the cognitive cost to the programmer. Compromising these systems deliberately for the sake of simplicity is always seen as a deficit rather than a conscious tradeoff.
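To make that tradeoff concrete, here is an illustration of my own, not one drawn from the essay or from Go's designers: Go's standard container/list stores untyped interface{} values, so the compiler cannot rule out a mistyped element, a class of bug a richer type system would make inexpressible. The check happens at run time instead, as a deliberate price of a simpler language.

    package main

    import (
        "container/list"
        "fmt"
    )

    func main() {
        l := list.New()
        l.PushBack(42)      // the list holds interface{} values...
        l.PushBack("hello") // ...so nothing stops us mixing types

        for e := l.Front(); e != nil; e = e.Next() {
            // The type assertion moves the check from compile time to
            // run time: a class of bug a more complex type system could
            // rule out entirely, accepted here for a simpler language.
            if n, ok := e.Value.(int); ok {
                fmt.Println("int:", n)
            } else {
                fmt.Println("not an int:", e.Value)
            }
        }
    }

Nothing is made impossible here; the bet is simply that the run-time check costs less than the language complexity it would take to eliminate it.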
The problem is that this rubric for assessing the quality of programming languages has no obvious connection to their effectiveness in building software systems.
The approach taken with Go was to start with C, remove things that were difficult to use correctly, and fill in the gaps until there was nothing left that was sufficiently simple or sufficiently orthogonal to add. Take a known set of tradeoffs, a known point on the continuum between simple and sophisticated, and make minor adjustments based on dozens of accumulated years of practical experience with real world development. This is an unapologetically straightforward engineering approach to system building.
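A small sketch of my own of what "start with C, remove things that were difficult to use correctly" looks like in practice: the shape of the code is recognizably C, but manual memory management, errno-style sentinel errors, and unterminated buffers are gone.

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig is C-shaped: open a file, read it, hand back the bytes.
    // What's missing are the parts of C that were easy to get wrong: no
    // malloc/free pairing, no errno, no NUL-terminated buffers to overrun.
    func readConfig(path string) ([]byte, error) {
        // Multiple return values replace out-parameters and sentinel
        // error codes; the error travels with the result it qualifies.
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading %s: %v", path, err)
        }
        return data, nil // reclaimed by the garbage collector, not the caller
    }

    func main() {
        cfg, err := readConfig("example.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("read %d bytes\n", len(cfg))
    }

None of the pieces are novel, which is rather the point: each is a known tradeoff, adjusted by experience.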
Of course, you can't sacrifice everything on the altar of simplicity. Many low-level bytecode and assembly languages are trivial to implement and port, but their interfaces are too difficult for programmers to use. A balance has to be struck between the number of features and the simplicity of the language.
We've seen time and time again that systems that never attempt to do the right thing have interfaces that are too complex to use, and systems that sacrifice simplicity for completeness have implementations that are too complicated to get right. Systems at the complex end of the spectrum often achieve the "right thing" via behaviors that are undesirable for practical reasons, or claim to do the right thing but don't. Sufficiently useful abstractions will leak, so it's best that their behaviors are simple enough that their impacts can be understood up front. Bugs have a complexity commensurate with the complexity of their system's implementation.
Where the jury finally comes out on Go may take years to determine. There are no rigorous formal methods for measuring how "good" a language is, so the judgment mostly happens by default as popular systems thrive and unpopular ones wither and die. Despite its detractors, Go continues to grow in popularity on the back of its strengths, which means it has earned a timeline for appraisal long enough to match its aspirations. Whether its perceived weaknesses will begin to tell in the real world is something we should start seeing as the systems born of it begin to age, but so far the outlook is good.
It's telling that much of the adoption has been for projects that, 5 years ago, probably would not even have been attempted due to the poor state of the tools in that niche. Even if Go ultimately fails, the longer it continues to grow, the more likely it is that the next generation of languages will start with Go in the same way that Go started with C, and given the latter's pedigree and staying power, I'd say that would make it a huge success.