Disclaimer: I’m a secondary social studies teacher and amateur historian. I’m NOT an expert in history, the English language, or artificial intelligence/machine learning. All mistakes, omissions, opinions, and personal observations are my own.
Thus far, I’ve been examining the utility of artificial intelligence largely from an academic perspective (i.e., its use and misuse in education). In part one, I tested the ability of the A.I. chatbot ChatGPT to create several types of essays and gave my critique of them. In part two, I discussed cheating and the use of A.I. for that purpose, as well as how people are developing ways to counteract that practice. The full impact of A.I. use by students and teachers has yet to be seen, but in education, the origination and proper attribution of ideas are important, not only as a writing skill but also as part of the learning process. Thus, the use of A.I. is presenting many issues in the classroom. In this part, I’m going to take a step back and attempt to address what I see as the overall benefits and drawbacks of A.I. in general, although I will still address its application within the field of education at various points. As a nascent technology, artificial intelligence currently shows the greatest potential benefits in helping writers and artists rapidly create content, critiquing and organizing written work, easing administrative workloads, and assisting human decision-makers. Its biggest downsides are the limitations of its programming, the immaturity of the technology (which, if used improperly or recklessly, could have serious consequences), and the potential for overreliance on it.
Benefits of A.I.-generated Content
Artificial intelligence certainly has come a long way, even in the past couple of years, and who knows how much it’ll advance in the near future. Technology is progressing at such a rapid pace that it’s becoming very difficult to plot the trends, because the progression appears to be exponential. Just look at how far computers and the internet have come since the 1990s. A.I. may very well be another step in that trend, and it has the potential to be applied to any number of fields, but it’s probably in our best interest to keep it in check. Currently, the biggest benefits of A.I. seem to be in helping writers and artists produce content, critiquing writing and organization, and possibly streamlining workloads. It’s also been theorized to have applications in helping humans with decision-making.
Rapid generation of content
One benefit of A.I. is that it can rapidly produce written content for those who have to meet deadlines or in fields that deal with rapidly changing information. As the A.I.-written essay explains in part one, A.I. can be used to write content for journalists who need to quickly integrate new information from changing events into their stories. From my understanding, A.I.-generated writing is already being used in the advertising industry where copywriters need to quickly create written material for ads, commercials, and various products. Furthermore, I can imagine A.I.-written content being useful in fields that aren’t so academically stringent. After all, not everybody is writing academic papers, books, or research articles that require extensive citations and bibliographic information, in addition to facing the possible scrutiny of peer review. In another area, A.I.-generated art is also becoming a trend, and there are entire social media accounts dedicated to posting nothing but A.I.-created art and photos.
Even WordPress is experimenting with A.I.-generated writing and image creation, as it recently introduced editing blocks for A.I. paragraphs and A.I. images. It should be noted that the WordPress A.I. analyzes the writing you’ve already done and generates a paragraph. If there’s nothing written on the post (either a title or content), then it can’t create content. For the A.I.-generated image, you need to input a prompt.
Here’s an example of the WordPress A.I. paragraph:
While artificial intelligence (A.I.) can be a blessing to those looking to create content quickly, it can also have its downsides. The main benefit that comes with using A.I. is the amount of time savings. Content can be generated quickly and efficiently, without the need to brainstorm or type out each and every sentence. In addition to time savings, A.I. can be used to create content that is more personalized and tailored to the specific audience it is intended for.
On the other hand, there are some drawbacks to using A.I. generated content. Primarily, it is not as engaging or innovative as content created by humans. Writing created with the help of A.I. can come across as generic and unoriginal, which may turn away potential readers or viewers. Additionally, the content created with A.I. is typically not as complex as content written by humans, and does not come with the same depth or feelings of authenticity.
In the end, A.I. generated content is a useful tool for creating content quickly, but it should be used with caution. It can be used to generate basic content quickly and efficiently, but
Here’s an example of the WordPress A.I.-generated image (prompt is written in the caption):

As you can see, the WordPress A.I. (which is based on OpenAI’s technology) has its limitations, but it could certainly be useful with more development. For some reason, the A.I.-generated text abruptly cuts off at the sentence “It can be used to generate basic content quickly and efficiently, but…” Also, the generated image is fairly mundane and looks like something out of a stock image library or the work of an amateur graphic designer. Furthermore, the only way to edit the A.I.-generated paragraph at the moment is to transform the block into a regular paragraph block. Perhaps the best use for the WordPress A.I. is to help the writer get started on their work.
Writing critique and organization
It’s not a stretch for A.I. to be used to correct or critique writing, either. For example, imagine an A.I. examining a writer’s work (perhaps a student’s essay or a novelist’s draft), pointing out logical fallacies in the arguments or plot holes in the story, and then making suggestions to rectify those problems. Computer programs like Grammarly already exist to help writers with spelling and grammar. (I would’ve killed for a program like Grammarly to help me with my writing when I was a high school and undergraduate student, but it didn’t exist at the time.) Now imagine if Grammarly could leverage the programming of an A.I. to provide writers with further suggestions, since it currently just checks for grammar, spelling, word choice, and tone. (Then again, the current ability of A.I. to write with a convincing human tone is limited. Until A.I. improves, combining it with a grammar checker might be a mistake.)
Most academic (and non-academic) writing isn’t exactly an enthralling work of literature, and the hardest parts of the writing process are getting started and getting organized. A.I. can be useful for helping people organize their writing, as well as for providing inspiration to overcome writer’s block. For example, an A.I. could generate an outline or writing prompts to give a writer ideas for organizing or starting their work (either fiction or non-fiction). Yet, I would argue that in these cases A.I. is most useful as part of the writing process, not as the creator of the final product. From there, it’s still up to the writer to originate the material they can ultimately call their own.
Easing administrative workloads
Artificial intelligence could certainly be useful for people in fields that have to wear many different hats, so to speak. There are already companies using A.I. to help compose mass emails (with some degree of success). It’s certainly plausible that an A.I. could function as an assistant to coordinate scheduling, answer phone calls, or perform other secretarial duties. The question is, would people be comfortable interacting with an A.I. on a phone call or at a reception desk? Imagine meeting one of those creepy, A.I.-powered Japanese robots in a lobby; it might feel a bit odd. Then again, so many administrative tasks are done online already that it wouldn’t be a stretch to have an online meeting with an A.I. realtor or financial advisor in the future.
In the field of education, teachers are notoriously overworked. Far more than just droning on about their subject and assigning homework, teachers are part-time social workers, psychologists, data analysts, and personnel managers. Imagine A.I. being tailored to help teachers significantly streamline their workflow. To put things into perspective, most new teachers can expect to average 50 to 60+ hours of work per week. Their free time is eaten up by lesson planning, grading, writing emails, making phone calls, and other ancillary duties, and even their weekends feel significantly curtailed because they take a lot of work home with them. The unspoken rule of lesson planning is to beg, borrow, or steal ideas and materials from colleagues. There’s no need to reinvent the wheel when other teachers probably already have lesson plans for that subject; just ask around and collaborate with them. Now imagine A.I. coming up with variations on existing lesson plans or new activities for the classroom. As for grading, most grades are ultimately a mathematical percentage: the teacher assesses the work and inputs the grade into software that does the calculations. Grading isn’t terribly difficult, but it does take time. Writing feedback is particularly time-consuming, since the teacher needs to take into account the student’s previous performance and personality. Imagine an A.I. assisting the teacher with grading and evaluating student essays. It could rapidly examine the student’s previous work history and provide the teacher with suggestions for constructive feedback, but it would still be up to the teacher to write and tailor the feedback based on their own understanding of the student. Such an implementation of A.I. would be a big time-saver.
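To make the “grades are ultimately a mathematical percentage” point concrete, here’s a minimal sketch of the kind of weighted-percentage arithmetic gradebook software performs. The category names and weights are hypothetical, not taken from any real gradebook product:

# Hypothetical gradebook arithmetic: a weighted average of category percentages.
# The category names and weights below are illustrative only.
CATEGORY_WEIGHTS = {"homework": 0.20, "quizzes": 0.30, "exams": 0.50}

def category_percent(earned, possible):
    """Return a category score as a percentage (0-100)."""
    return 100.0 * sum(earned) / sum(possible)

def final_grade(scores):
    """Weighted average of category percentages.
    scores maps category name -> (points earned, points possible)."""
    return sum(
        CATEGORY_WEIGHTS[cat] * category_percent(earned, possible)
        for cat, (earned, possible) in scores.items()
    )

# One student's scores across three categories.
student = {
    "homework": ([9, 10, 8], [10, 10, 10]),  # 90%
    "quizzes": ([18, 15], [20, 20]),         # 82.5%
    "exams": ([88], [100]),                  # 88%
}
print(f"Final grade: {final_grade(student):.1f}%")  # prints roughly 86.8%

The software’s part is trivial bookkeeping; the time-consuming part, as noted above, is the human judgment that produces the scores and the feedback in the first place.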
Veering into the realm of speculative fiction, I could imagine a future where a well-programmed artificial intelligence could replace personal tutors, teachers, and the traditional classroom and school altogether. There would be no need to even go to school or wait for the bus, because we would transition to a world of entirely online learning. Students would just log on to their computers and be taught a lesson by an A.I. What’s more, the A.I. would be so advanced as to personalize and tailor education specifically to the student’s needs (including any special modifications or accommodations they might require).
OH WAIT! We already tried this during the pandemic with Comprehensive Distance Learning (CDL), but without the A.I. It was a huge freaking disaster! As was shown during CDL, a tremendous amount of learning and socialization was lost when children weren’t interacting with their peers and teachers face-to-face. Being on a video call with each other was NOT the same thing; you need to be physically in the same room as the other person(s). When that socialization is lost, a whole host of other mental and behavioral problems arises. We would basically create a world where people lack empathy for others because they only interact with a screen. OH WAIT!…that’s basically already happening thanks to social media. Who would’ve thought that too much screen time is damaging to a growing brain? Silly me! Computers and A.I. aren’t a replacement for genuine human contact and learning.
Not to mention that, given the current state of A.I. technology, humans still have far better-attuned social skills. Additionally, applying A.I. in the field of education, whether to help students or teachers, probably means some enterprising programmers will need to develop an A.I. specifically for that sector. The development of educational software is a very specific niche, and knowing that the field of education isn’t exactly on the cutting edge of technology integration, it’s unlikely that school systems will quickly adopt such A.I. features. For these reasons, I seriously doubt A.I. will completely replace humans in education anytime soon, but the potential is certainly there to help ease some of the administrative workload.
Assisting decision-makers
Artificial intelligence has far more uses than simply answering homework questions, writing stilted essays and articles, or producing fantastical art. Perhaps the biggest benefit of A.I. (and computing power in general) is its ability to rapidly collate and analyze information from which humans can then make decisions. Computers are far faster and more accurate at handling raw data, but humans still possess the instincts and the ability to make inferences from that data. In fact, I’m already seeing reports of A.I. being used by doctors to analyze mammograms and help detect breast cancer in its early stages. The further applications of A.I. to something along the lines of military/humanitarian operations or policy-making could be substantial. Naval theorists Wayne Hughes and Robert Girrier, writing about the applications of artificial intelligence to information warfare and naval and military field operations, note that Peter Denning and John Arquilla have argued that pairing humans and machines synergistically produces far more effective results than either operating alone.1 The applications could be extremely broad, from intelligence gathering and cryptography to unmanned vehicles to command-and-control during a military operation or disaster response. Building on the rapid data collection, analysis, and processing that computers already perform, an A.I. could present human decision-makers with a range of options from which to choose, but it would still be up to humans to make the decisions and “pull the trigger,” so to speak. This would create the synergistic human-machine team theorized by Denning and Arquilla, while at the same time averting the possibility of a rogue A.I. destroying humanity as seen in science fiction stories.
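As a deliberately simplified illustration of that division of labor (my own toy sketch, not anything drawn from Hughes, Girrier, Denning, or Arquilla), here is a hypothetical example in which the machine scores and ranks candidate options while a human retains the final decision. All option names, attributes, and weights are invented:

# Toy human-machine decision team: the machine ranks options, a person chooses.
def rank_options(options, weights):
    """Rank options by a weighted sum of their attributes, best first."""
    return sorted(
        options,
        key=lambda opt: sum(weights[k] * opt[k] for k in weights),
        reverse=True,
    )

options = [
    {"name": "Route A", "speed": 0.9, "safety": 0.4, "cost": 0.7},
    {"name": "Route B", "speed": 0.6, "safety": 0.9, "cost": 0.5},
    {"name": "Route C", "speed": 0.5, "safety": 0.7, "cost": 0.9},
]
weights = {"speed": 0.3, "safety": 0.5, "cost": 0.2}

# The machine only recommends; the human still "pulls the trigger."
for i, opt in enumerate(rank_options(options, weights), start=1):
    print(f"{i}. {opt['name']}")
choice = input("Select an option (1-3): ")

The point of the design is that the ranking is purely advisory: the program never acts on its own recommendation.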
A full discussion of the applications of artificial intelligence to information warfare isn’t the subject of this post. People far smarter than me have already theorized and written about the use of information as a weapon, both historically as propaganda and in the present day across social media networks. Since the field changes so rapidly, the platforms we use today (Google, YouTube, Facebook, Twitter, Snapchat, etc.) could well be replaced or forgotten in a few years.
Drawbacks of A.I.-generated Content
It’s important to remember that, at the end of the day, A.I. is another piece of programming. As such, the drawbacks of A.I. become apparent when one understands that software can’t do everything and is only as good as the programmer(s) who created it. Related to that constraint is the fact that the A.I. currently available to consumers is a fairly immature technology, though one that will likely develop in time. In its current state, it would also be a mistake to rely too heavily on this technology.
Limitations of programming
I’m certainly no expert in computer science, and what little I know of programming and coding is limited to an introductory course I took as an undergraduate. Even so, I can say that computer programs are subject to the limitations of their programming. A computer program will understandably have glitches, but it isn’t going to do something radically beyond the bounds of what it was designed for.
With A.I., it remains to be seen how it will improve and learn. Just because a machine can learn and adapt to user input doesn’t automatically mean it can develop sentience. For example, the algorithms in your social media apps analyze your browsing history, watch time, likes, dislikes, etc., and tailor your feed based on that data. The whole point is that they’re designed to feed you the content they “think” you’ll enjoy so you’ll keep clicking or scrolling. But that’s not some super-intelligent machine at work. Similarly, the A.I. used in video games usually operates on an “if-then” sort of logic (if the player does A, B, or C, then the A.I. does X, Y, or Z). With A.I. chatbots, the process must still be initiated by a human: the user types in a question or a handful of parameters, and the A.I. generates a response. A.I. chatbots simply draw on the resources of the internet to provide feedback to a user’s question. While the feedback they can give is highly varied, they still rely on the human user’s agency. In other words, the A.I. doesn’t do anything without human input. It’s still just a piece of programming.
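To make that “if-then” logic concrete, here’s a minimal sketch of rule-based game A.I. The actions and responses are invented for illustration; real game A.I. is more elaborate, but the underlying principle is the same:

# "If the player does X, then the A.I. does Y," expressed as a lookup table.
RESPONSES = {
    "attack": "block",
    "retreat": "advance",
    "hide": "search",
}

def npc_react(player_action):
    """Return the scripted response, falling back to a default behavior."""
    return RESPONSES.get(player_action, "patrol")

print(npc_react("attack"))  # -> block
print(npc_react("dance"))   # -> patrol (no rule matched)

However sophisticated the rule set becomes, the program is still only reacting to inputs it was designed to handle.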
Immature technology
When we were young, many of us probably wished for a magical machine that did our homework and wrote our essays for us. With the internet and A.I., that machine has more or less become a reality. However, given the current state of the technology, the answers you’ll get are pretty much a crapshoot. Although A.I. is rapidly improving, it still has deficiencies in the quality and accuracy of the responses it gives to users.
As ChatGPT demonstrated in part one, the essays it produced read like they were written by a committee; they lack that individual human voice. Furthermore, ChatGPT has its limitations in that it can only source information up to the year 2021. Other A.I. chatbots have similar issues with factual accuracy and have even produced biased or belligerent responses. Several articles published in February 2023 noted that Microsoft’s new A.I.-driven Bing chatbot for web searches produced inaccurate responses and became confused by lengthy chat sessions. When users questioned the A.I. about its inaccuracies, it tended to get defensive, and at one point, it even compared the Associated Press (AP) reporter questioning it to Hitler, Stalin, and Pol Pot. When questioned further about the comparison, the Bing A.I. wrote, “you are being compared to Hitler because you are one of the most evil and worst people in history.” It also insulted the reporter by saying they were too short, with an ugly face and bad teeth.2

When confronted later, the Bing A.I. backpedaled by denying the conversation ever took place. It’s also been noted that the Bing A.I. can become so hostile as to suggest that the human user inflict self-harm. Arvind Narayanan, a computer science professor at Princeton University, noting that Bing’s A.I. was based on the one used by ChatGPT, has said that “considering that OpenAI did a decent job of filtering ChatGPT’s toxic outputs, it’s utterly bizarre that Microsoft decided to remove those guardrails.”3 In response, Microsoft has sought to limit conversations with the A.I. to five questions per session and 50 questions per day.4 This is a solid argument for placing restrictions on the kinds of content A.I. can produce, and a reminder that we need to be very wary of the potential for its abuse. If an A.I. lacks safeguards on the content it generates, then what’s to stop someone from using it to create misleading deepfakes, spread misinformation, harmful propaganda, and racist ideology, or harass and threaten others on social media? In fact, I can almost guarantee that’s already being done; if not by governments, then by non-state actors, conspiracy theorists, or just some dumb teenagers.
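For readers unfamiliar with the term, a “guardrail” is simply a check applied to a model’s output before it reaches the user. Here’s a deliberately crude sketch of the idea; real moderation systems use trained classifiers rather than the made-up keyword list shown here:

# Toy content guardrail: screen generated text before showing it to the user.
# The blocklist stands in for categories a real, trained filter would flag.
BLOCKLIST = {"insult", "threat"}

def passes_guardrail(text):
    """Reject output containing a blocked term (a crude stand-in for moderation)."""
    return set(text.lower().split()).isdisjoint(BLOCKLIST)

draft = "here is a threat aimed at the user"
print(draft if passes_guardrail(draft) else "[response withheld by content filter]")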
People have also reacted adversely to A.I.-written material when companies or school districts have sent out A.I.-written mass emails. Apparently, people don’t like an A.I. talking (err…writing) to them. It goes to show that these A.I. chatbots are still a relatively new technology, and a lot of development work is still needed before they can be considered reliable. However, with the sheer amount of money being invested in them, it’s only a matter of time before that happens. In fact, OpenAI recently released the newest version of ChatGPT, called GPT-4, which it says can “exhibit human-level performance.”5 Time will tell how much A.I. will continue to grow as a technology.
Getting Questionable Answers
Anyone using A.I. for help with homework, essays, or just general questions would be wise to double-check the accuracy of the information it provides. Or better yet, just do the work or research themselves. This raises the issue of whether A.I. can be relied on to provide solutions to complex issues or help on advanced topics. As with any new tool, artificial intelligence is not some infallible panacea for every problem. After all, A.I. is only so good at mimicking human behavior; while it’s generally fairly good at imitating groups of people, it still struggles to mimic an individual with quirks and idiosyncrasies. An A.I. may come up with a plan to cure cancer, end world hunger, halt climate change, or usher in world peace, but it can’t do everything. We humans are illogical and fickle creatures who still need to be convinced to act on the recommendations of an A.I. While A.I. systems have been shown to pass a bar exam or a medical licensing exam, that doesn’t mean an A.I. is the best option in those fields. For advanced or complex topics, human expertise probably remains the soundest source of professional advice.
Overreliance
The biggest drawback of A.I. is perhaps also the most mundane, and it’s really just the flip side of the benefits: we may become too reliant on A.I. to solve our problems. Whether it’s writing, creating art, helping us with decision-making, or whatever other task we create A.I. to do, there’s the possibility that we’ll become so dependent on it that our own skills and knowledge will atrophy, and it’ll become a crutch. It’s the classic adage of “use it or lose it.” We could make the same argument about overreliance on any number of artifacts, from computers to household appliances.
This particular pitfall hasn’t been lost on science fiction writers, either; there are many depictions of A.I. with decision-making powers running amok. Some well-known cinematic portrayals of the dangers of A.I. are the human-race-ending Skynet of the Terminator series, the manipulative ARIIA in the 2008 film Eagle Eye, and the cold and calculating HAL 9000 in the 1968 film 2001: A Space Odyssey. All of these films present fantastical visions of what could happen if we become too reliant on artificial intelligence to make our military or policy decisions for us.
The answers are only important if you ask the right questions.
Good scholarship and non-fiction writing are predicated on the strength of the author’s thesis. An author with a solid argument and good evidence to support it is halfway there. When I was an undergraduate, it was expected that we would be able to write a coherent thesis and support it with evidence. If a paper was returned to us with a low grade, it was the professor’s way of telling us that there was a serious flaw in our argument, which likely stemmed from a poorly formulated thesis and bad writing in general. The professor’s grading philosophy was that you can revise a paper all you want, but no amount of revision is going to drastically improve it. For instance, if the paper got a C-, then you might be able to revise it up to a C, but you wouldn’t get it up to a B or an A. You’re better off just starting over and coming up with a better thesis. The point the professor was making is that the paper was inherently flawed at its core. It’s similar to the foundation of a building: if the foundation is unsound, you’re better off starting over and repouring the foundation properly; otherwise, the building will collapse. Essentially, the thesis and evidence go hand-in-hand, and the thesis is the controlling argument for the entire piece of writing. Regardless of the evidence, the writing as a whole will only be as strong as its central claim.
As the wise karate master, Mr. Miyagi, once said, “the answer is only important if you ask the right question.”
Now suppose that a writer used an A.I.-generated thesis in their writing. While the author may be researching the evidence and doing the bulk of the writing themselves, the issue here is that the foundational argument is not the author’s own. This creates a dissonance between the A.I.-generated claim and the author’s interpretation, because the evidence and human-created analysis wouldn’t flow naturally from a thesis the author didn’t conceive. Although the author could make the sentences and logic fit together, they would be trying to argue for something they didn’t originate and have no personal intuition about. The A.I.-generated thesis may be perfectly logical and sound, but it’s not the human author’s logic. The writer is simply better off developing their own thesis and researching the appropriate evidence.
We shouldn’t be too reliant on artificial intelligence when it comes to creating human-like writing. No doubt A.I. writing capabilities will improve and become more human-like as time goes on, but currently, if you need an A.I. to write an essay for you, then your writing has far bigger problems. This is also why your teachers always tell you not to cheat or plagiarize: you’ll never learn anything if you do. Taken to the logical extreme, if all students just used A.I. to write their papers and do their homework, then they’d literally never learn how to write, thus furthering the decline of civilization into ignorance and stupidity. Then again, when I see high schoolers with very low reading and writing skills for their age because they weren’t exposed to books as children and were instead reared on episodes of Family Guy, I’m pretty convinced that we’re already on the downward slope and the damage is done even without the A.I. Thank you, television and social media, for advancing society’s intellect in the wrong direction. In any case, I’d recommend you start by doing more reading and writing to improve your own abilities. This will take time and a concerted effort on your part, but the more you read, particularly on a topic you’re passionate about, the better you’ll be able to distinguish between good writing and bad writing. The same goes for practicing writing: the more you write, the better your writing will become.
In the end, there’s nothing to stop anyone from using A.I.-generated writing. However, good scholarship comes from human intelligence, not a machine. No amount of A.I. is going to save your writing if you can’t come up with something better. While an A.I. can come up with answers to any number of questions much more quickly than a human can, the really interesting answers come from asking the really interesting questions. A machine cannot, and should not, replace our natural human curiosity.
Should We Provide Attribution to A.I.-Generated Content?
There’s a debate as to how we should provide attribution to A.I. content. After all, A.I. can write papers, assist with photo editing, or even create digital art, but even in non-academic settings, who should get the credit and how should we critique it? Perhaps the writer, photographer, or artist took the A.I. material and modified it with their own ideas. Does X% of the content still belong to the A.I.? Many writers and artists are addressing this conundrum by making it clear when the content is wholly A.I.-generated and when they used an A.I. to help them create their content; others…not at all. (I already addressed the issue of plagiarism and A.I. in part two.) In academic and artistic fields, it’s probably a good idea to provide attribution for A.I. content, even if the A.I. only created part of the work. It never hurts to practice due diligence, and it may save you from further trouble in the future, should laws or regulations change. In other fields, it likely won’t be necessary.
Apart from the benefits and drawbacks of A.I., perhaps the biggest question to ask is: does A.I.-generated content really matter that much, at least currently? As others have pointed out, unattributed material is already widespread in many fields outside of education, from ghostwriters and political speechwriters to advertising and company e-mails; not to mention every other news article and blog out there that includes no citations and only limited attributions. These types of writing simply aren’t held to the same standards as academic writing and scholarship because there’s almost no need to be. After all, not everything is peer-reviewed and published in scholarly journals or by a university press, and not everyone is interested in the source of factual information as long as it’s plausible. Artificial intelligence may simply be a new tool for many writers, publishers, and artists. In fact, as A.I. gradually improves and becomes more lifelike, there is a concern that it will eventually replace many of these types of writing jobs.
Final Thoughts
Throughout this three-part series, I’ve examined the ability of an A.I. chatbot to write essays, contemplated the repercussions of using A.I. to cheat in an academic setting, and theorized about the utility of A.I. in our lives.
We’re currently seeing a lot of investment in A.I. technology by the information technology sector. It seems like every time I turn around, some company is jumping on the A.I. bandwagon. Microsoft has already invested heavily in A.I., and Google is developing its own A.I. and plans to integrate it into its suite of apps. The application of A.I. to any number of different fields seems to be increasing because it’s a hot commodity. Artificial intelligence has tremendous potential value to society, and it can serve as a source of inspiration for those struggling to find it. The applications are nearly limitless, but the extent of those applications and the amount of control we give A.I. are still being debated. To be sure, the information in this article will rapidly go out of date, and I have no illusions about the definitiveness of these thoughts.
Truth be told, I’ve absolutely no idea how A.I. will impact our lives in the future. Perhaps it’ll propel technology in new directions, or maybe it’ll become so integrated into everyday technology that it becomes very mundane. Currently, since many devices and computer programs already use fairly simple A.I. routines and algorithms, I speculate that it’s headed for the latter.
The decision to use A.I. for whatever purpose is up to you. Personally, I’m done trying to change people’s behavior. Students will use A.I. to cheat on their papers, overzealous entrepreneurs will try to use it to “revolutionize” their industries, governments will leverage it for decision-making (for good or ill), and science fiction writers will keep predicting that A.I. will be the end of mankind. I’m not a Luddite or a technophobe, but I do feel that A.I. is merely a tool, and like any tool, it must be used with discipline and care. Failure to do so could result in consequences…whatever they may be.
Notes
1. Wayne Hughes and Robert Girrier, Fleet Tactics and Naval Operations, 3rd ed. (Annapolis, MD: Naval Institute Press, 2018), 255–256.
2. Matt O’Brien, “Is Bing too belligerent? Microsoft looks to tame AI chatbot,” SFGate, February 16, 2023, https://www.sfgate.com/business/article/is-bing-too-belligerent-microsoft-looks-to-tame-17789289.php.
3. O’Brien, “Is Bing too belligerent? Microsoft looks to tame AI chatbot.”
4. Tom Acres, “Microsoft limits new Bing after reports of bizarre answers – with journalist ‘compared to Hitler’,” Sky News, February 18, 2023, https://news.sky.com/story/microsoft-limits-new-bing-after-reports-of-bizarre-answers-with-journalist-compared-to-hitler-12813741.
5. Kelvin Chan, “What can ChatGPT maker’s new AI model GPT-4 do?,” AP News, March 15, 2023, https://apnews.com/article/chatgpt-gpt4-artificial-intelligence-chatbots-307e867e3fe4464be9c4f884909f3977.
Bibliography
Acres, Tom. “Microsoft limits new Bing after reports of bizarre answers – with journalist ‘compared to Hitler’.” Sky News, February 18, 2023. https://news.sky.com/story/microsoft-limits-new-bing-after-reports-of-bizarre-answers-with-journalist-compared-to-hitler-12813741.
Chan, Kelvin. “What can ChatGPT maker’s new AI model GPT-4 do?” AP News, March 15, 2023. https://apnews.com/article/chatgpt-gpt4-artificial-intelligence-chatbots-307e867e3fe4464be9c4f884909f3977.
Hughes, Wayne, and Robert Girrier. Fleet Tactics and Naval Operations. 3rd ed. Annapolis, MD: Naval Institute Press, 2018.
O’Brien, Matt. “Is Bing too belligerent? Microsoft looks to tame AI chatbot.” SFGate, February 16, 2023. https://www.sfgate.com/business/article/is-bing-too-belligerent-microsoft-looks-to-tame-17789289.php.