As hardware becomes parallel in response to hard physical limitations, parallel computation has moved to the forefront of grand challenges in computer science. One facet of this challenge is the design and development of programming languages for expressing parallel programs, along with compilers and run-time systems for obtaining efficiency in practice. In the current state of the art, it is all too common to write a beautifully parallel program and observe that parallel executions are actually slower than the sequential one. In this talk, I will describe the reasons for this state of affairs and offer several techniques for writing parallel programs at a high level while also guaranteeing efficiency on modern multicore computers.