Why is using AI to write a story frowned upon? Isn't it just making use of the resources available to you?

Even if we create guardrails to limit the use or harm of AI technology, how long will those guardrails hold up once AI starts growing again?

I am in love with the AI storytelling aspect of the class right now. I'm an imaginative person and have been thinking about my own stories. These AI tools and the work we did in class have really helped me visualize my ideas and form a structure around them. I believe this is going to revolutionize media and help creators make some crazy in-depth stories. Do you think that ChatGPT will put a lot of people out of the storytelling industry?

Does the government have a role or duty in ensuring the safety of AI users? I think it goes without question that a level of restrictions and safeguards is necessary, but whose role is it to enforce them?

I find it interesting how effective ChatGPT's advanced voice mode is compared to the current voice technologies on smartphones. This raises the question of whether something similar can or will be integrated directly into phones, as a replacement or upgrade for Siri.

If technological advancements are outpacing humans, how can generative AI help communities tackle challenges like homelessness, poverty, and food scarcity by making better use of resources and predicting where help is needed most?

What is the best way to use AI as a student, and eventually as someone who has a job? Also, what is the best way to make AI part of your skill set? Is it like computer science, where the more languages you know, the better you are? Are you better off the more AI tools you can use?

As AI continues to advance, it is clear that rules need to be made to ensure it is safe and fair. But who should be in charge of making and enforcing these rules? AI helps drive innovation, but how do we balance progress with the need for limits? The spread of misinformation and deepfakes is another challenge: what steps should be taken to control their impact? AI is also making big decisions in healthcare, finance, and law enforcement, and there clearly needs to be oversight to ensure it is used responsibly. How can we create rules that prevent harm while still allowing AI to grow and improve?

This week we talked about why some professors get upset when students use AI/ChatGPT. Where do you think the line falls between assistance from AI and over-reliance on AI?

Which AI platforms do you find the easiest and hardest to use, and why? Which AI platform do you think gives the best results, and why? Why do you think ChatGPT is the most common? How is it different from others you have used, and what does the paid version add? (I'm curious because the only AI I have used is the free ChatGPT.) And why do you think DeepSeek, the first non-American AI to come to the US, became popular in America? I'm also curious about your opinion on Microsoft Copilot, as I just got a new computer with it and haven't tried it yet because I've been sticking with ChatGPT out of habit.

This week, one of our readings came from the blog One Useful Thing, featuring the article titled “The End of Search, The Beginning of Research.” In the article, the concept of narrow-AI agents is discussed: AI that specializes in a specific domain rather than possessing general intelligence. These AI agents can conduct in-depth, analytical research and pull together sophisticated reports that summarize information. Narrow-AI agents can perform tasks that once required consulting firms or skilled analysts relatively well. When machines can research, analyze, and act faster than humans, where will human intellect and capabilities be channeled? How will humans' roles be reshaped in a variety of industries (sales, consulting, healthcare, teaching, etc.) as narrow AI automates complex tasks, provides data-driven insights, and makes decisions once reserved for human expertise?

If guardrails have existed for a while now, why is there always something in the news about someone's network or data center being infiltrated? Is it attackers' growing skills, or is it more that the technology is getting worse?

How well can emotion be portrayed in a story from AI when it does not have emotions? Can AI write a story of the magnitude of a Harry Potter book?

What are the current guardrails on AI such as ChatGPT? Do you think more guardrails should be put in place?

Will one AI service (ChatGPT, Copilot, Gemini, etc.) take over the entire field, the way Google did with search?

If we are getting to the point where it is almost impossible to distinguish AI-generated stories from human-written stories, won't creative writing soon become a dying profession? If someone wanted a book tailored to their own interests, they would never have to go seek one out that they may or may not like.

What are some of the guardrails currently being implemented for new AI models like DeepSeek?

How will OpenAI combat DeepSeek in order to stay relevant?

As AI systems become more involved in areas like healthcare, law enforcement, and financial systems, how can we ensure their decision-making is ethical? How can we make them accountable and responsible for their actions? Lastly, how can we eliminate bias in the system and make it fair for everyone?

Can you talk more about the upcoming panel you will be on? I am very excited to come to a couple of panels.

How important are the laws surrounding AI image and design creation becoming? Has this issue come up more and more in fields such as graphic design and user experience design as design companies increasingly use AI to create wireframes, images for social media posts, or images and designs for webpages? I have seen AI-generated posts starting to be labeled by the platforms they are posted on; is this something we could see labeling images everywhere?

How does AI help and hurt creative processes? What are the top design and innovation firms doing with AI now?

This past week got me thinking about how AI might be a much larger threat to data in the near future than I first thought. I wonder how much urgency there will be to implement guardrails that prevent cyberattacks, and whether things like this will be looked at by the government.

If AI is continuously modifying our outputs (e.g., rewriting emails), and humans on the receiving end are also relying on AI to synthesize those outputs (e.g., summarizing emails), are we moving toward a reality where all information is generated and processed by AI rather than by humans? Would we just become the Dead Internet Theory?
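For a concrete picture of that loop, here is a minimal Python sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and prompts are placeholders, not anything from the readings. One call expands a short note into a polished email, and a second call compresses that email back into a summary:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(instruction: str, text: str) -> str:
    # Send one instruction + text pair to the model and return its reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

note = "Running 10 minutes late to the 3pm sync."
email = ask("Rewrite this note as a polite, professional email.", note)
summary = ask("Summarize this email in one short sentence.", email)
print(note, email, summary, sep="\n---\n")

If the final summary lands close to the original one-line note, the two humans have effectively exchanged machine-generated text while reading almost none of it, which is exactly the dynamic the question describes.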

We keep trying to shorten the inference time. However, does a shorter inference time always lead to better performance? Is the inference thorough, even with a short inference time? Is anything dismissed or left behind?

What are some recent breakthroughs in generative AI? For example, Apple Intelligence in iOS 18.2.

As AI security measures and guardrails become stricter, users may be required to identify themselves or have their interactions linked to their identity. This raises an important question: To what extent should individuals be granted privacy when using AI, if at all? Could the use of AI itself become an invasion of privacy?

If AI continues to advance as rapidly as it does, what guardrails will actually be created to govern AI usage? I think it will be used for bad before the guardrails are installed, and then regulations will have to be put in place after the damage has already been done.

It's interesting to see and use the different forms of AI and to think about how much of an impact it will have in the future.

What do artists and other creatives think about the ruling on whether AI work is subject to copyright? Will they use this to their advantage by using AI to collaborate on their work, or do they believe outsiders may be able to exploit and infiltrate their industry? Should AI companies have to pay artists if they train their models on those artists' work? How much modification of AI work makes a piece of art an individual's own?

How do the emotions and experiences we’ve lived through affect our design process thinking in comparison to GenAI, which hasn’t lived through either, but may have a better structural guide when ‘thinking things through’?

If you were to provide an AI application with the same query numerous times, would the outputs ultimately remain homogeneous? Is there a hierarchy of data from which the AI grabs information time and time again for specific subjects?
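One way to see why repeated runs of the same query vary is the model's sampling "temperature." The model is not pulling from a fixed hierarchy of documents; it samples each next word from a probability distribution, and temperature controls how much randomness that sampling allows. Here is a minimal Python sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and prompt are just placeholders:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
PROMPT = "In one sentence, describe a sunrise."

for temperature in (0.0, 1.0):
    print(f"--- temperature={temperature} ---")
    for run in range(3):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temperature,  # 0.0 = near-deterministic, higher = more varied
        )
        print(f"run {run + 1}: {response.choices[0].message.content}")

At temperature 0.0 the three outputs are typically near-identical; at 1.0 they diverge noticeably, even though the underlying model and its training data are unchanged.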

When we talk about ethics, if AI could generate highly persuasive, personalized arguments tailored to an individual’s psychology, should there be ethical limits on how it’s used in negotiations, sales, or politics?

I find it interesting how AI has advanced to the point where we can use it to tell stories and help us create scripts for stories or even movies. I also found it interesting that we can now talk with an AI assistant and even have it complete tasks for us. My one question is: how long do you think it will take for us to have an assistant that can schedule an appointment or dinner reservation for us over the phone? I know a few years ago Google tried to release something like this, but I don't think it necessarily worked out.

What is the consensus “best” AI model available to the public right now?

While creating these scenarios where AI is used for evil, I realized that as we evolve AI, these things might actually happen in the early stages. Also, do you think AI could someday take over the world?

Though there are some guardrails against criminal-activity-related responses on AI resources like OpenAI's, how come it is still so easy to work around them? I have seen posts online about people asking how to do illegal activities but rephrasing the request as “how would a criminal do…” It seems easy to stop, so why is it not resolved, and why are the generative models still formulating responses and doing the research?

Why is it that we talk about and fear the possibility of AI becoming aware of its existence and doing harmful things to mankind, yet at the same time we still put resources into developing it to be smarter and better than ever?

How can we be so sure that companies aren't using AI to steal information, or for whatever other harmful use they have in mind? Even with said guardrails, how can we be sure companies aren't lying about these things or trying to sell off user information?

What work fields do you think will never be taken over by AI? Or, in one way or another, will most fields end up with people working alongside AI?

Do you think that putting guardrails in place to limit the use of AI in areas such as ad production would help ensure that AI does not take jobs in that field and related fields?

I read recently that Meta has taken 70 terabytes of data from various sources, while Aaron Swartz faced a million-dollar fine and up to 50 years in prison for downloading less than 0.01% of that amount from JSTOR. Do you think Meta should or will face consequences for its actions?

OpenAI is planning on transitioning from a non-profit to a for-profit corporation, leading some to think this may be a bad thing while others think it is good. Will this end up being a positive or negative change for society as a whole?

I thought the Brain Behaving Badly assignment sparked some fascinating ideas and topics about AI usage. It was a little scary how many different scenarios people came up with for using AI in a way that was harmful, but I was relieved by all of the practical guardrails people were able to generate. My open-ended question is: do you think it is the responsibility of AI companies to think about the ethical dilemmas of AI, or of the individual federal governments of the world?

What AI will we be using in class? How is it currently different from others like ChatGPT or Google Gemini?

Dennis Cheatham

Associate Professor, Communication Design

Miami University

Updated: February 20, 2025 11:24 am