
The Analog Effort for AI Productivity

  • Writer: Jordan Mottl
  • Oct 20
  • 6 min read

Updated: Nov 5


I love my job, but let's be honest: I don't love all of it.


Every role has mundane tasks that stand between us and the work we find truly rewarding. I routinely search for ways to reduce the boring parts of my job and expand the time spent on the exciting and meaningful stuff. For me, nothing has delivered on that productivity promise quite like Artificial Intelligence (AI).


In 2025, I set a goal to identify practical AI use cases I could deploy right away to increase my productivity and enjoyment at work. Like most professionals, my time and budget are stretched thin by work, family, and a never-ending personal to-do list. However, I committed to a low-barrier learning strategy that manifested itself in four ways over the year.

1 - The Google Course (January)

This was the only formal "course" of my journey, which kicked off in January 2025: $60 for a four-week commitment, with ample time to complete the coursework. It brought videos, readings, and interactive assignments together and finished with a nice little Google Certificate.


The Course Review

For me, as someone with existing AI foundations, the course's basic rundown of AI principles (machine learning, training, risks, etc.) wasn't groundbreaking. However, it was a great refresher and, more importantly, it broadened my knowledge of different AI tools available both inside and outside the Google ecosystem.


This course (or a similar one) is an excellent way to build foundational AI knowledge across a large organization. It's ideal for companies initiating AI adoption and can be made even more powerful with simple customization.


My biggest "A-ha" moment came from an introduction to Google Labs, an advanced sandbox for building apps and workflows. Even though the technical aspects were over my head, it opened up a new horizon of what's possible. I understand that similar features exist elsewhere, but the course allowed me to conceptualize the function for the first time.


The Software Review (Gemini)

The Google course used Gemini, and choosing it was strategic: it forced me to work with Gemini when, up to that point, I'd only experimented with ChatGPT. I was gemini-uinely impressed. It was intuitive and outperformed ChatGPT in multiple ways.


Here are two memorable standouts about Gemini:

  • The Good - Generative Text: It's the best AI tool I've used for producing text, and my go-to for crafting tricky sentences. I particularly appreciate the choices it offers when generating language, such as "action vs. formal", "concise vs. empathetic" or "professional vs. conversational". This makes it a great learning tool: by comparing the different options, I learn how to adjust my writing style through sentence structure, word choice and grammar. I'm a better writer because of it.

  • The Bad - Image Generation: Its image generation capabilities weren't as sophisticated as my experience with DALL-E 3 on ChatGPT. That said, now that I'm nine months removed from the course, I'm curious to go back and test image generation again.



2 - Podcast Buffet (January - October)

My initial deep dive into the broader AI landscape started passively with podcasts. Before my commute to and from work, I would queue up an episode, then jot down relevant notes afterward. I initially listened to industry-wide shows like Marketing AI and The Artificial Intelligence Show.


These pods discussed the transformational potential of many different tools, rapidly developing news, and philosophical debates on theoretical concepts. To say it was a learning curve is an understatement; it felt like I was drinking from a fire hose. They were overwhelming given my goal of finding specific productivity use cases. While interesting, they didn't translate into immediate productivity improvements for my current job and weren't useful beyond general knowledge.


As my journey progressed later in the year, I found podcasts better suited to my specific goals. The Copilot Studio podcast, for example, provided use cases that I could immediately test, and these yielded significant, applicable learnings. Relatively quickly, I was able to collect practical ways to deploy AI. This was especially beneficial when experimenting in conjunction with Copilot (see #4): I'd listen, learn, then try to recreate the success on my own. It was a powerful combination.



3 - Trialing Free Tools (January - May)

My informal learning through podcasts fueled experimentation as I actively tested the free versions of ChatGPT, Gemini, and Copilot. Admittedly, the free versions were often frustrating: they limited features and often reduced the impact of my experiments. This step reinforced a key lesson: to realize the full power of these tools, you need to go beyond the limitations of the free tier. Nonetheless, throughout the year I kept refining which use cases would best support me in being more productive.


Here is a one-line review of how I found working with ChatGPT, Gemini, and Copilot as of October 2025:

  • ChatGPT: Best for fun experimentation and creative prompts, but limited in recent information.

  • Gemini: Best for editing and writing, and a good source of up-to-date information.

  • Copilot: Good all-around, but excels above all others when leveraging professional information and communications within the M365 universe.


4 - Copilot Beta Team (June)

In June 2025, my organization selected me for the MS Copilot Beta Testing Team. This was the moment my learning truly accelerated. I was now provided a professional Copilot license with the full suite of tools, and Copilot was integrated into our organization (our "tenant"). This meant trusted data security, and I could work with real-life use cases rather than personal data and fictitious scenarios.


This creates robust AI outputs because they source data directly from our business environment. Answers are well-rounded and immediately applicable to my work. Calendar and chat are integrated for quick recall, and reference files are just a "/" away when I want to pinpoint information sources.


While this was my most meaningful learning, it wasn't a magic bullet. To truly master the tools and refine my questions, I had to be intentional, carving out extra personal time to research, experiment, and test; this couldn't be squeezed into a workday. Like any innovation, there is a period of disruption where productivity drops until new skills are learned.


Final Takeaway: The Analogue Finding

My 2025 AI learning journey has been a testament to scaffolding, with each step building on the last. Receiving a full Copilot license accelerated my learning, but the exponential growth was fueled by the knowledge I had already accumulated.


However, after wrapping up the year and achieving my learning goal, my biggest discovery is surprisingly analogue: AI won't make you productive automatically.


Like any powerful tool, it demands effort to adopt. The most successful learners aren't those who buy the most expensive subscription (although that helps); rather, they tend to be those who are naturally curious and motivated to experiment. Alternatively, they are those who see the long-term vision well enough to force progress through discipline: they carve out personal time to methodically trial use cases.


The true success of my AI learning journey wasn't about the software I used, but about the attitude toward learning itself. It's a very old-school finding, and an interesting commentary on AI and what it can and cannot do for us.


I asked AI (Gemini) to comment on this Final Takeaway in two paragraphs; here is what it told me:


You've hit on the central paradox of AI productivity: it's not an automatic upgrade but an amplification of the user's existing mindset. People often expect AI to deliver full automation, where a simple subscription yields maximum results. Instead, AI only provides powerful amplification, multiplying the quality of your input (your prompt, your strategy, your effort). The successful user is not the one with the most expensive tool, but the one who shifts from the passive, "default mode" of asking for basic summaries to the high-effort, iterative process of actively directing the AI. This effort involves the uniquely human skills of judgment, strategic direction, and clarity of purpose—elements the AI cannot provide on its own.


Your biggest discovery, therefore, is an interesting commentary on what AI can and cannot do: it can execute, but it cannot inspire the necessary work ethic. The true success of your learning journey wasn't about the software's capabilities, but the discipline to carve out personal time for methodical experimentation. This old-school, analogue effort—the commitment to curiosity and systematic trial-and-error—is the actual engine of AI-driven productivity. It proves that the most valuable skills in an AI-powered world are not technical, but rather metacognitive: the motivation and method to learn and adapt.

