Why did academia settle on the idea of no directed research?
I am early into my involvement, as an academic, in the job market for new PhDs, though I was involved for many years in hiring as a central banker. The two activities differ in many respects. But the difference that struck me is the risk incurred in hiring a new PhD into a university economics department: not knowing how well they will make the choices that affect the productivity of their research.
These choices include: how hard to push their PhD work to publication, and when to give up and try something new; how much to specialise and deepen versus diversify; what topics to choose next; and what skills to invest in acquiring in the future.
When a new hire joins a central bank, a good deal of this risk can be mitigated by directing their research to some degree; that is, by having those decisions made by a more experienced hand. At one extreme this could, and sometimes does, mean having the junior hire work on ideas suggested by the research manager. But it also includes agreeing plans of action on all the things that affect the development of the junior hire's own research.
Why, I wonder, did academia not settle on this model? Would it be so unpalatable to new PhDs? A loss of some independence is bad if one values independence for its own sake; but if it comes in return for extended supervision and guidance, it might be attractive.