====== Ethan Mollick ======

**Ethan Mollick** is a researcher and educator at the Wharton School of the University of Pennsylvania who specializes in the intersection of artificial intelligence, innovation, and organizational change. His work has gained significant attention for examining how institutional, political, and professional factors shape the real-world deployment and adoption of AI technologies, particularly in regulated industries.

===== Academic Background and Position =====

Mollick holds a position at the Wharton School, where he conducts research on how emerging technologies are adopted and integrated within organizations. His scholarly focus extends beyond purely technical considerations of artificial intelligence to encompass the socioeconomic, regulatory, and political dimensions that determine whether AI capabilities can actually be deployed in practice, regardless of their technical feasibility (([[https://www.wharton.upenn.edu|Wharton School of the University of Pennsylvania]])).

===== Research on AI and Professional Regulation =====

Mollick has developed influential frameworks for understanding the barriers to AI adoption in regulated professions. His research argues that the deployment of AI in sectors such as medicine, law, psychology, and banking is not primarily determined by technical capabilities or algorithmic performance. Instead, he contends that political and professional factors play a decisive role in shaping what AI systems are permitted to do within these fields (([[https://www.theneurondaily.com/p/subq-ships-12m-tokens-at-1-5-the-cost|The Neuron - AI and Regulatory Markets (2026)]])).

Specifically, Mollick's analysis suggests that established professional classes, including physicians, attorneys, psychologists, and financial services professionals, exercise substantial influence over regulatory policy through their participation in political processes and their financial contributions to political parties.
These professional groups have structural incentives to limit the scope of AI applications that might disrupt their traditional service models or reduce demand for human practitioners. This creates a situation in which the technical feasibility of AI solutions becomes secondary to political-economy considerations in determining practical implementation.

===== Implications for AI Deployment Strategy =====

This framework has significant implications for AI companies and policymakers. It suggests that even highly capable AI systems may face regulatory or professional barriers to deployment in regulated industries, necessitating strategies that engage with professional organizations, regulatory bodies, and political stakeholders. The analysis implies that purely technical improvements to AI systems, without corresponding attention to professional and political dynamics, may prove insufficient for market adoption in these critical sectors.

===== Recognition and Influence =====

Mollick's work has influenced discussions among technologists, entrepreneurs, and policymakers regarding the realistic barriers to AI adoption beyond technical constraints. His perspectives are particularly relevant to discussions about AI governance, regulatory capture, and the role of established professions in shaping technology policy.

===== See Also =====

  * [[recursive_superintelligence|Recursive Superintelligence]]
  * [[anton_korinek|Anton Korinek]]
  * [[import_ai_newsletter|Import AI Newsletter]]

===== References =====