Ted Dziuba, an internet-notorious opinionated windbag and adept comedic writer, recently wrote a scathing post about Node.js, half in reply to a somewhat bizarre rant by Node's author Ryan Dahl on the state of software and his desire to retreat to the simplicity and stability of the Unix Philosophy.
Central to Dziuba's rant are an ad hominem attack on Dahl and the whole "Node community", and a complaint that the promise of non-blocking I/O scaling evaporates the moment a CPU-bound task blocks the event loop and stops all I/O activity. On the second point, Ted calls it a scalability disaster waiting to happen. He then complains that Node encourages architectures that run against "The Unix Way", describing a world where people serve HTTP directly from Node's built-in HTTP servers.
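The CPU-blocking complaint itself is real and easy to demonstrate. A minimal sketch, using Python's asyncio as a stand-in for Node's event loop (the function names are illustrative):

```python
import asyncio
import time

async def io_task(done):
    await asyncio.sleep(0.01)        # stand-in for a network wait
    done.append(time.monotonic())

async def cpu_hog(seconds=0.2):
    # A synchronous hot loop: it never yields, so nothing else runs.
    t = time.monotonic()
    while time.monotonic() - t < seconds:
        pass

async def main():
    done = []
    io = asyncio.create_task(io_task(done))
    await cpu_hog()                  # monopolizes the loop for ~0.2s
    await io
    return done

start = time.monotonic()
done = asyncio.run(main())
# The 10ms "I/O" could not finish until the hog released the loop.
print(done[0] - start)
```

The 10-millisecond "request" takes over 200 milliseconds of wall time, because the event loop can't service it while the hot loop spins. That's the failure mode Dziuba is pointing at, and it's why the right answer for CPU-bound work is elsewhere (more on that below).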
Being a Python programmer, Dziuba surely knows about eventlet and the other event-based I/O systems for Python, and has correctly identified when you'd want to use them: when your processes are I/O bound and thread-based parallelism adds enough CPU overhead to lower your overall throughput. His complaints about Node on this point are mostly unfounded if you understand what event-based I/O actually brings to the table.
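What it brings to the table is cheap concurrency for workloads that spend their time waiting. A sketch of the I/O-bound case, again using asyncio as a modern stdlib stand-in for an eventlet-style loop:

```python
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)         # stand-in for a slow network call
    return i

async def main():
    # One thread, one process, a hundred concurrent "requests".
    return await asyncio.gather(*(fake_request(i) for i in range(100)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
# The waits overlap: roughly 0.1s total, not 100 x 0.1s.
print(len(results), elapsed)
```

A hundred overlapping waits complete in about the time of one, with no thread-per-connection overhead. That, and not CPU parallelism, is the claim event-based I/O actually makes.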
He would also know that there's been a shift in the way web services are deployed: away from tightly coupled (and complex) in-server modules like mod_php and mod_python, away from specialized backend proxying protocols like FastCGI, and towards fronting simple fast-client HTTP servers like (g)Unicorn with simple slow-client HTTP servers like Nginx. Surely that's a more Unix-ey separation of concerns, and it's more Unix-ey still to simply speak HTTP everywhere. Since Node's HTTP server fits right into this emerging deployment style, I'm not sure what the fuss is about on this front.
As for achieving CPU concurrency in Node, the best way (as with Python) is to use multiple processes. I'm not currently aware of a multiprocessing-esque library for Node that makes this easy, but horizontally scaling CPU-bound operations across processes seems to me very Unix Philosophy indeed.
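On the Python side, the pattern looks like this with the stdlib's multiprocessing module; a minimal sketch (the function names are illustrative):

```python
from multiprocessing import Pool

def burn(n):
    # Deliberately CPU-bound: a sum of squares in a tight Python loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fan_out(workers=4, n=100_000):
    # Each call to burn runs in its own OS process, so four cores can
    # grind simultaneously and no single hot loop stalls the others.
    with Pool(processes=workers) as pool:
        return pool.map(burn, [n] * workers)

if __name__ == "__main__":
    print(fan_out())
```

Each worker gets its own interpreter and its own core; the event loop (or GIL) of any one process is never the bottleneck for the others. That's the escape hatch for the CPU-blocking problem above.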
I can't speak to the Node community's preferred way of doing things, since although I've used Node a bit I haven't really participated in its community culture. I'll just say that there are more things in heaven and earth than serving web content.
I think the sweet spot for Node is the way it lets you write small, stable, fault-tolerant daemons for shifting network traffic from one place to another, or for medium-scale centralization of concerns, like small caching proxies. These programs, statsd and my own similar logd among them, have very low CPU requirements but still require highly scalable I/O. Being able to run such a daemon in one process (or, for redundancy, as multiple backend processes watched by an overlord, which Node makes fairly easy), and to write very little code to do it, is quite nice.
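The overlord pattern itself is small enough to sketch. This is a hypothetical Python rendering, not how statsd or logd actually do it: a parent watches its workers and resurrects any that die (the worker body is a placeholder that exits quickly so the sketch terminates; a real daemon would run its I/O loop there).

```python
import multiprocessing
import time

def worker(wid):
    # Placeholder for a daemon's I/O loop; exits quickly so the
    # sketch ends instead of running forever.
    time.sleep(0.05)

def spawn(wid):
    p = multiprocessing.Process(target=worker, args=(wid,))
    p.start()
    return p

def overlord(num_workers=2, max_restarts=2):
    procs = {wid: spawn(wid) for wid in range(num_workers)}
    restarts = 0
    while restarts < max_restarts:
        time.sleep(0.1)
        for wid, p in list(procs.items()):
            if restarts >= max_restarts:
                break
            if not p.is_alive():
                p.join()
                procs[wid] = spawn(wid)  # resurrect the dead worker
                restarts += 1
    for p in procs.values():
        p.join()
    return restarts

if __name__ == "__main__":
    print(overlord())
```

A real overlord would loop forever, back off on crash storms, and log exits, but the core of it really is this small: watch, join, respawn.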