Generative AI for Transformation: Key Takeaways

AI Summary: Core conclusions from MIT’s Applied Generative AI for Digital Transformation class.

A true believer in lifelong education, I just wrapped up my latest course at MIT. Applied Generative AI for Digital Transformation was an eight-week course, and I put in roughly 80 hours on nights and weekends. The course is part history and part hands-on learning: it reviews LLM fundamentals and how the pieces work together so that students can use an agent or create their own, and everyone signs up for and uses dozens of tools to get comfortable with the lessons. A large part covers ethics, along with case studies on implementing AI in companies. It does not tell you how to insert AI into your own company; it just gives you the tools to do that if you want to. Here are the key takeaways:

AI is changing faster than MIT can keep up.

Even though the school is known for its robust technology leadership, and its professors are some of the best in the world, writing and developing coursework takes time. The coursework laid an excellent foundation in the history of AI and the elements that make up these systems. But with weekly, if not daily, leaps and announcements of new tools in the highly competitive AI marketplace, what was written a year ago is yesterday’s news.

To mitigate this, there is a robust AI group that we all joined, with students ranging from developers to the C-suite. The professors also supplement the coursework with weekly live sessions, which are available to all students of the global program, current and graduated alike. And by the end of the class, some of the coding homework was fully accessible to everyone: no one needed to code to create an agent, because natural-language options were available to assist.

Ethics: Ivory Tower vs. On the Street

Before selecting MIT, I checked out numerous other programs, and no matter which institution offered the class, each AI syllabus had a robust section on ethics. Clearly, global leaders see this as a paramount issue, and executives are being trained on how to implement AI ethically.

But having attended meet-ups, lived in SF, followed the news and pundits, and watched how AI is being implemented around us, I can see that ethics in AI are clearly missing, at least outside of Europe. From the reduction in entry-level jobs with no clear alternative career paths, to the enshittification of the internet, to AI being used to vet resumes and insurance claims, the dollar is clearly the main driver, not the humane treatment of our fellow humans. That is to say nothing of the security issues that arise when AI becomes the main tool of hackers and bad-acting social engineers. The tempting gold rush of AI is simply too strong to resist in favor of the higher reasoning that human rights and human dignity matter.

Implementing AI at scale will mean rewiring your business

We have used AI in our business at 300FeetOut for a long time, from code repos to Photoshop to reporting. But those are all small, easily adopted gains that save time. To truly bring AI into the workflow will require an operational change-management plan. You can’t shoehorn AI into the systems you already have; they simply aren’t set up for it. To do it right, you need to rebuild from the bottom up to see the gains and opportunities.

AI Core Conclusions

AI is here, it’s changing rapidly, and it’s not 100% safe for humans. We need to ethically implement organizational structures that leave space for the connections between people that serve the greater purpose. The MIT class provided a robust foundation in how AI has been built and a glimpse into how it can transform business. It felt a little like learning how to build an engine while knowing that the real impact, on the entire transportation system and the global economy, is still yet to come.
