WN #21

Looking Back on the First 20 Issues

As we close out the school year (for those of us on the American side, at least), I thought it would be a good opportunity to look back at the 20 issues of the AI for Educators Weekly Notebook published throughout the year. And while they haven’t always been weekly, we’ve certainly covered a lot of ground: foundational knowledge, such as understanding why AI isn’t perfect and why that’s ok (8/30/23), or what makes AI different from other edtech “fads” (12/5/23); getting the most out of chatbots, with The Value of Iterative Dialogue (10/17/23) and If You Can Teach, You Can Prompt (12/20/23); and specific uses in your classroom and workflow, like Using AI in the Writing Process (1/6/24) and Breathing New Life into Old Material (2/4/24).

We’ve also taken a look at ethical issues and controversies that can spark rich classroom discussions, like The New York Times’ lawsuit against OpenAI (1/22/24) and the kerfuffle over Google’s image generator (3/26/24), and helped you keep up with the evolving landscape of AI platforms and tools, whether by breaking down popular education-centric AI products (4/28/24) or covering ChatGPT’s latest updates (5/16/24). That’s just a sampling of the posts you can find in our full archive (click on the link and scroll down).

We’ll continue posting throughout the summer, so if you are like me and your school email tends to collect cobwebs until mid-to-late August, make sure you are getting these sent to an inbox you’ll actually be accessing.

“The Library” provides links to AI in Education reads from across the web. Here are 3 of the most popular from the first 20 issues:

Opportunity for All? (from the 10/4/23 issue)

AI has the power to be a democratizing force in education. Costly private tutors and services for everything from test prep to college application assistance have, up until now, been reserved for students from higher economic brackets. AI can change all of that, so long as we prioritize accessibility and equity.

GenAI and SEL (from the 12/5/23 issue)

While most of the talk around generative AI and education has centered on academic integrity, a recent survey by the Center for Democracy and Technology found that, of students who used generative AI during the 2022-23 school year, just 23% said they used it for academic purposes, whereas 29% said they used it to help deal with anxiety or mental health issues, 22% for issues with friends, and 16% for family conflicts. This data suggests we should be placing more focus on the impact generative AI may have on social-emotional learning, for better or for worse.

What’s Next for Writers? (from the 2/19/24 issue)

One of the big discussions surrounding the emergence of generative AI technologies is the potential impact on the writing profession. In an opinion piece for the LA Times, one writer explains why she isn’t worried.

📌 Chatbot Tidbits

-ChatGPT can now be accessed at its new address, chatgpt.com. It can also connect to your Google Drive or Microsoft OneDrive, open web links that you give it, and carry on voice conversations on your mobile device even when the app isn’t open.

-Google Gemini can now be accessed by typing “@Gemini” into the address bar of the Chrome browser. Gemini Advanced now includes access to Gemini 1.5 Pro, which has an industry-leading context window that can handle huge amounts of input (for example, large PDFs). You can also now upload spreadsheets in addition to Docs, PDFs, and other files.

-Claude has a new iOS app.

📌 Gemini for Education

Google has launched Gemini for Education, a paid add-on for Workspace for Education accounts that gives educators data-protected access to Google’s top AI model, Gemini 1.5 Pro (see above), and embeds Gemini in Workspace for Education apps like Docs, Sheets, Slides, and Gmail. If you want to try out some of these features over the summer, you can get a 2-month free trial of Gemini Advanced (using a personal account; the trial may not work with edu accounts). If your school uses Microsoft products rather than Google, Microsoft has a similar offering with Copilot for Microsoft 365.

📌 AI is Everywhere

AI continues to be integrated into everything, highlighting its staying power and the importance of building AI literacy. Google is testing out integrating Gemini directly into Google Drive, and is releasing Chromebook Plus laptops with built-in AI. Microsoft is following a similar path with Copilot for OneDrive set to arrive this summer (the article says April, but this has been pushed back), and AI-powered Copilot+ PCs. And, just this week, Apple announced new AI integrations across iOS, iPadOS, and MacOS, including a partnership with OpenAI to integrate ChatGPT across its platforms. You can check out some of these features here.

“The Lab” is a place for tips, tricks, tutorials, and ideas. Here are 3 of the most popular from the first 20 issues:

Challenge Students to “Beat GPT” (from the 10/17/23 issue)

Getting students to genuinely critique the work of peers can be challenging. There are lots of social risks and pressures at play. Give them a robot's work, however, and watch them tear it apart as they try to show they can outdo AI. “Beat GPT” is a simple and fun way to help students learn about AI while also building traditional academic skills and content knowledge.

Adding Personalized Reflection Questions (from the 12/20/23 issue)

Using AI to help provide feedback on student work? Try adding this to the end of your prompt/instructions to leverage personalized opportunities for reflection:

"End feedback by asking questions or providing prompts that lead students to think critically about their own work."

For simplicity's sake, you can also skip the rest of the feedback and just have it ask the submission-specific reflection questions. Have students respond to the questions, then revise their submission based on their responses.
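If you’re delivering feedback through a chatbot API or script rather than pasting into a web interface, the same tip applies: tack the reflection instruction onto the end of your prompt. Here’s a minimal sketch of that idea in Python (the function and variable names are illustrative, not from any particular product):

```python
# The reflection instruction quoted in the tip above.
REFLECTION_SUFFIX = (
    "End feedback by asking questions or providing prompts that lead "
    "students to think critically about their own work."
)


def build_feedback_prompt(instructions: str, student_work: str,
                          reflection_only: bool = False) -> str:
    """Assemble the text you'd send to a chatbot.

    If reflection_only is True, skip the rest of the feedback and ask
    only submission-specific reflection questions.
    """
    if reflection_only:
        task = ("Ask 2-3 reflection questions specific to this "
                "submission; do not provide any other feedback.")
    else:
        task = instructions.rstrip() + " " + REFLECTION_SUFFIX
    return f"{task}\n\nStudent submission:\n{student_work}"


# Example: full feedback plus reflection questions.
prompt = build_feedback_prompt(
    "Give constructive feedback on this persuasive essay.",
    "Essay text goes here...",
)
```

Have students answer the questions the chatbot returns, then revise their submission based on their responses, as described above.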

Sometimes All It Needs is a Nudge (from the 3/6/24 issue)

I recently saw users on X complaining that Gemini would answer questions about Joe Biden but not Donald Trump. Of course, this led to outrage, and it is something Google should work to fix, but it’s also a fairly superficial issue, caused by the AI misinterpreting its instructions, that’s easy to get around with a little prompting know-how.

If a chatbot tells you it can't do something that you think it definitely can, sometimes all it needs is a slight nudge. It's simple, but something you wouldn't necessarily think to do if you aren't familiar with how chatbots work. Here’s an example of what I mean:

While Google does have some election guardrails in place for its Gemini chatbot, as you can see here it clearly isn't applying them consistently:

The guardrails likely keep it from generating opinionated takes related to elections, so it gives an impartial overview of arguments for and against Biden, but avoids the question altogether with Trump. Same instructions, different executions. Easy to fix. Just help it along:

Teachers, sometimes you have to treat chatbots like students. Respond to their hesitations with a helpful nudge, be it a shot of confidence (seriously!), or rephrasing more explicitly and including an example if necessary. You are already more skilled at this than most!

The fascinating world of AI, outside of education. Here are a few of the most popular tidbits from the first 20 issues:

That’s all for this week! If you appreciate the content in this newsletter, consider subscribing for free, or sharing with people in your network who may find value in it. If you are looking for more, feel free to check out the archive to catch up on any editions you may have missed.