Determining Authenticity in the Era of ChatGPT
Shin Minsong is an undergraduate in English Linguistics and Language Technology, Hankuk University of Foreign Studies.
Lee Jiwon is an undergraduate in English Linguistics and Language Technology, Hankuk University of Foreign Studies.
Jeon Juyeon is an undergraduate in English Linguistics and Language Technology, Hankuk University of Foreign Studies.
Max Watson is a professor in English Linguistics and Language Technology, Hankuk University of Foreign Studies. He led the student authors of this paper, which was presented at their university's annual College of English Academic Conference, where the team won first place.
Abstract
This study investigates the ability of native English-speaking professors (L1EP) and non-native English-speaking students (L2ES) to distinguish between essays generated by ChatGPT and those written by students. The research concentrates on differences in assessment criteria and on how well individuals identify GenAI content. While ChatGPT can aid students in writing assignments, its limitations include a lack of emotional depth and critical thinking, and concerns remain about the potential negative consequences of its use in education. The study employs quantitative methods to collect and analyze data, focusing on language skills and depth of professional knowledge. Results reveal that L1EP outperformed L2ES in identifying GenAI essays across all categories: coherence, style, depth of analysis, and credibility. The study finds that accurate differentiation between human and AI writing requires more than mere familiarity with ChatGPT; it requires deeper proficiency in language and in the assessment criteria.
Introduction
ChatGPT is an AI-powered language model developed by OpenAI, capable of generating human-like text based on context and past conversations (Huallpa, 2023). The underlying principle of ChatGPT is a deep learning model that utilizes a vast amount of text data to understand and generate human-like text (Lazzeri, 2023). It analyzes the input it receives and generates responses based on the patterns and information it has learned from its training data. The introduction of advanced AI models like ChatGPT has both positive and negative impacts on education (Sharma & Yadav, 2022). Positively, AI can provide personalized learning experiences, assist with grading, and offer educational resources to a broader audience. It can automate administrative tasks, freeing up time for educators to focus on teaching. AI-driven chatbots and virtual tutors are believed to provide students with immediate help and support. Negatively, concerns have been raised about the potential job displacement of educators, data privacy, over-reliance on AI, plagiarism, and students' loss of authentic production. Educator reaction to student usage of ChatGPT varies widely. Some educators accept and support the use of ChatGPT, embracing it for its potential to enhance learning and writing skills; they even encourage students to use AI tools responsibly to improve their writing. Others are cautious and prefer to maintain traditional teaching and evaluation methods, often out of concern about plagiarism and declining student writing skills (Yu, 2023).
This study aims to compare written content produced by ChatGPT with that of students. It seeks to explore the specific strengths of ChatGPT in the context of writing and how they can be leveraged. Additionally, the study assesses the ability of professors and university students to classify human-written and AI-written content in relation to their language skills. This research hopes to discover how well humans can determine authenticity based on what they have acquired as human beings, such as their language skills, their experiences, and their depth of professional knowledge.
Research questions:
- Do native English-speaking professors (L1EP) correctly identify authorship of essays as being either GenAI or by human student writers, and do they do this better than non-native English-speaking students (L2ES)?
- Are there differences in rating bands within a group? (i.e., is one band more accurately attributed to the correct group than others?)
- Are there differences in the rating bands between groups?
Literature review
Since the introduction of ChatGPT, an AI-driven chatbot tool, scholars have expressed a multitude of differing opinions about its appropriate application and its limitations. The usefulness of the tool is evident: it furnishes users with high-quality content, assists in generating innovative ideas for academic papers, and identifies grammatical errors. It helps authors write their documents efficiently and quickly, with appropriate language and wording. However, it is debatable whether it can replace human-generated content in the field of writing. ChatGPT's limitation lies in the lack of emotional depth and life experience that contribute to an individual voice, identity, and distinctiveness in writing. Additionally, it is limited in its ability to critically reflect on its own writing and assess the accuracy of generated content. It also lacks a deep understanding of complex concepts that require higher-level thinking skills, contextualization, and common sense (Barrot, 2023). Although it can seemingly copy the skills of humans using the varied data it collects from internet sources, scholars claim that it will remain limited to imitation.
Although it cannot completely replace human authorship, ChatGPT often becomes a useful tool for students' writing assignments. Its portability and accessibility allow students to engage with it whenever and wherever they want, optimizing their learning experience. In addition, it saves students considerable time by providing quick answers to their questions and requests. ChatGPT's proficiency in understanding human language makes it very easy for students to write creatively, ranging from poems, short stories, and novels to other types of writing such as informative essays. Its capability to generate human-like responses also offers students assistance with specific subjects during class and with assignments, making ChatGPT indispensable for academic support. On the other hand, educators face demanding tasks and challenges in adapting to the increasing use of AI. Amid its technological appeal, potential negative consequences have emerged in some areas. Concerns include threats to academic motivation, integrity, and creativity, the introduction of biased or misleading content, and the loss of initiative and curiosity when students use ChatGPT as a substitute for their own learning (Zapata et al., 2023). Although the integration of ChatGPT into the educational sphere significantly facilitates student work, these concerns about academic motivation remain.
While ChatGPT is a good helper for student writing, generating writing prompts and providing feedback, further research is needed to evaluate its effectiveness in improving student writing skills (Abdullayeva & Musayeva, 2023). Depending on how this capability is demonstrated, writing education as a whole could be transformed. Based on their experience of using it, some scholars advocate integrating ChatGPT into the educational ecosystem. The tool can create educational content, becoming an important content source for educators. Some professors think that letting students use AI tools is an effective way to help students master knowledge and improve learning efficiency (Stephens, 2023; Villasenor, 2023). However, questions remain about the appropriateness of using ChatGPT, considering its deficiencies: it can produce low-quality output, lacks adequate privacy protection, and raises ethical concerns. Considering that education depends on an emotional connection between teachers, students, and material to achieve academic success, ChatGPT cannot facilitate this level of direct interaction. It also cannot capture the nuances and learning styles of individual students, which may differ from one student to the next (Muhammad, 2023). It is important to teach students how to use these technologies correctly and effectively to ensure that the learning process remains meaningful. Balancing the freedom to use these tools with ethical considerations and effective learning experiences is needed to keep students responsible for their writing (Villasenor, 2023).
As a response to students who use AI irresponsibly in their writing, AI detectors were introduced (Pitcher, 2023). These are tools designed to detect when text was partially or entirely generated by ChatGPT. A 15-person company co-founded by recent Princeton graduate Edward Tian introduced the software tool GPTZero, aimed at both educators and students. Tian's primary goal in developing the program was to facilitate collaboration between educators and students, thereby fostering responsible AI usage (Bowman, 2023). The American Federation of Teachers (AFT) also announced that they recently established a partnership with GPTZero to integrate it into the classroom. Nevertheless, an issue has emerged: the detectors are not consistently reliable. One recent empirical case demonstrated the problem; a section of the US Constitution was fed to GPTZero, which concluded it was likely to have been written entirely by AI. This inspired confusion and plenty of jokes. In explanation, Tian noted that frequently reproduced texts such as the US Constitution appear repeatedly in the training data of large language models; because those models are therefore trained to produce similar text, GPTZero, which predicts how likely a passage is to have been machine-generated, flags such passages as AI-like, leading to this intriguing phenomenon. More importantly, the reliability of the detectors is inconsistent when the real writer is not a native English speaker (Liang et al., 2023). These findings suggest that, even as various AI tools are integrated into everyday life, questions remain about whether humans can fully rely on them, especially in critical and nuanced applications.
AI detectors such as GPTZero learn and analyze text to identify passages generated by AI, but the process is imperfect. In a 2023 study, Gao et al. gathered ten research abstracts from each of five high-impact-factor medical journals and asked ChatGPT to generate research abstracts based on their titles and journals. They evaluated the abstracts using an AI output detector and a plagiarism detector, and had blinded human reviewers try to distinguish whether abstracts were original or generated. Most generated abstracts were flagged by the AI output detector. Blinded human reviewers correctly identified 68% of GenAI abstracts but incorrectly identified 14% of original abstracts as being GenAI. Reviewers indicated that it was surprisingly difficult to differentiate between the two, but that the GenAI abstracts were vague and had a formulaic feel to the writing. The authors suggest clear disclosure when a manuscript is written with assistance from large language models such as ChatGPT. Reassuringly, there are patterns that allow GenAI text to be detected by AI output detectors, although techniques to fool such detectors have also been explored. Though imperfect, AI output detectors may be one tool to include in the research editorial process.
Methodology
Quantitative research methods were employed to systematically acquire, analyze, and objectively interpret numerical data. Quantitative approaches are characterized by their emphasis on quantifiable data, rendering them particularly suitable for examining causal relationships and behavioral trends across academic domains.
Data collection overview
Our survey instrument was constructed using LimeSurvey, a free and open-source statistical survey web app. The platform offers essential statistical functions and graphical representations of survey outcomes. Our analysis involved separating the data by defined group features and by the assessment criteria for the essays. The experiment assesses the quality of work generated by ChatGPT and by student writers using the following criteria:
Coherence: The human-like flow and organization of ideas.
Style: The appropriateness and variance of language, including vocabulary.
Depth of analysis: The level of critical thinking demonstrated.
Credibility of data: The reliability and trustworthiness of the information.
Each participant received five randomly assigned essays drawn from a pool of 16 writings, consisting of eight diagnostic essays written by undergraduate students before ChatGPT was available and eight generated by ChatGPT. The GenAI essays were produced with the same predetermined prompts that the students had written from, detailed in the appendix. At the bottom of each essay, each of the four assessment criteria was presented with clickable bubbles offering four judgment options: "human-like," "uncertain," "AI-like," and "no answer." "No answer" was selected by default to ensure accuracy in data collection. A minimal sketch of this assignment logic appears below.
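The following Python sketch illustrates the essay-assignment logic described above. It is only an illustration: the actual randomization and response recording were handled inside LimeSurvey, and the essay labels, function name, and data structure here are hypothetical.

```python
import random

# Hypothetical labels for the 16-essay pool: eight student-written, eight GenAI.
pool = [f"human_{i}" for i in range(1, 9)] + [f"genai_{i}" for i in range(1, 9)]

# The four assessment criteria each participant judged for every essay.
criteria = ["coherence", "style", "depth of analysis", "credibility of data"]

def build_questionnaire():
    """Draw five essays at random and pre-fill every judgment with 'no answer'."""
    essays = random.sample(pool, 5)  # five essays per participant
    return {essay: {criterion: "no answer" for criterion in criteria}
            for essay in essays}

if __name__ == "__main__":
    print(build_questionnaire())
```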
Participants
Participants were recruited via email, Canvas LMS, Facebook groups, and SNS direct messages. A reward of a Starbucks Americano was promised to the first 10 respondents to complete the survey, and another 10 were awarded as random prizes among the remaining respondents. A total of 221 surveys were attempted, with 102 full responses received. Evaluation of the writing samples was performed by two groups: professors in Korea and undergraduates at Hankuk University of Foreign Studies. Professor participants were further categorized into two groups based on their language background: those who speak English as an L1 and those who speak it as an L2.
Table 1. Group classification for the survey and each group’s respondents.
| Group | Language background | Respondents |
|---|---|---|
| Professors | L1 English (L1EP) | 37 |
| Professors | L2 English (L2EP) | 7 |
| Students | L2 English (L2ES) | 49 |
In the Professors group, 37 respondents with English as an L1 and seven with English as an L2 completed surveys. In the Students group, 49 respondents with English as an L2 completed surveys. Due to the low sample size of the L2EP group, chi-square tests were not possible for them; their responses are provided in Table 2 for reference.
Results
For research question #1, native English-speaking professors (L1EP) correctly identified the authorship of essays as being either GenAI or by human student writers. This was demonstrated across the entire spectrum of criteria: coherence, style, depth of analysis, and credibility of the data. Raters were more successful at identifying human authors than AI authors. L1EP were also more likely than non-native English-speaking students (L2ES) to identify the correct authorship for all criteria and both author types.
For research question #2, there were differences among the criteria bands for the native English-speaking professors (L1EP). A chi-square test of independence for the "style" criterion was the most significant, X2 (2, N = 185) = 49.5, p < .001: L1EP can accurately distinguish writing styles that are human-like from those that are AI-like.
L1EP are also accurate in determining “credibility” between the two authors, X2 (2, N = 172) = 38.9, p < .001. L1EP correctly judged the difference in “depth” between the two, X2 (2, N = 179) = 33.5, p < .001. L1EP were also successful in considering differences in “coherence” between sources, X2 (2, N = 179) = 31.8, p < .001.
For research question #3, there were significant differences in rating bands between the native-speaking professors (L1EP) and non-native English-speaking students (L2ES). For the “coherence” criterion, a chi-square test of independence showed that L2ES were unable to determine a difference between human and AI writing, X2 (2, N = 237) = 3.0, p = .22.
L2ES were also unable to identify differences in “style” between the two sources, X2 (2, N = 238) = 1.7, p = .44. L2ES could not ascertain variation in “depth”, X2 (2, N = 232) = 1.0, p = .60. L2ES did not spot differences in credibility, X2 (2, N = 232) = 0.5, p = .80. There were further differences between the L1EP and L2ES in the “uncertain” bands.
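For readers who wish to check these statistics, each test can be reproduced from the frequency counts in Table 2 below. The following is a minimal sketch in Python, assuming the scipy library is available; it illustrates the calculation for the L1EP "style" counts and is not the original analysis script.

```python
from scipy.stats import chi2_contingency

# L1EP "style" counts from Table 2.
# Rows: true author (Human, GenAI); columns: human-like, uncertain, AI-like.
l1ep_style = [
    [75, 10, 12],   # human-authored essays
    [27,  7, 54],   # GenAI essays
]

chi2, p, dof, expected = chi2_contingency(l1ep_style)
n = sum(map(sum, l1ep_style))
print(f"X2({dof}, N = {n}) = {chi2:.1f}, p = {p:.2e}")
# Prints X2(2, N = 185) = 49.5 with p far below .001, matching the reported result.
```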
Table 2. Frequency counts for assignment of authorship, Human or GenAI, by three categories of raters: L1EP (native English-speaking professors), L2ES (non-native English-speaking students), and L2EP (non-native English-speaking professors).

| Item | True author | L1EP: Human-like | L1EP: Uncertain | L1EP: AI-like | L2ES: Human-like | L2ES: Uncertain | L2ES: AI-like | L2EP: Human-like | L2EP: Uncertain | L2EP: AI-like |
|---|---|---|---|---|---|---|---|---|---|---|
| Coherence | Human | **77 (86%)** | 5 | 13 (14%) | **76 (72%)** | 14 | 30 (28%) | **14 (82%)** | 2 | 3 (18%) |
| Coherence | GenAI | 34 (45%) | 9 | **41 (55%)** | 66 (62%) | 10 | **41 (38%)** | 4 (31%) | 2 | **9 (69%)** |
| Style | Human | **75 (86%)** | 10 | 12 (14%) | **73 (66%)** | 11 | 37 (34%) | **15 (83%)** | 1 | 3 (17%) |
| Style | GenAI | 27 (33%) | 7 | **54 (67%)** | 63 (58%) | 9 | **45 (42%)** | 3 (21%) | 1 | **11 (79%)** |
| Depth | Human | **66 (83%)** | 15 | 14 (18%) | **65 (64%)** | 16 | 37 (36%) | **12 (80%)** | 4 | 3 (20%) |
| Depth | GenAI | 27 (37%) | 11 | **46 (63%)** | 59 (63%) | 21 | **34 (37%)** | 3 (25%) | 3 | **9 (75%)** |
| Credibility | Human | **55 (86%)** | 26 | 9 (14%) | **60 (64%)** | 24 | 34 (36%) | **9 (82%)** | 8 | 2 (18%) |
| Credibility | GenAI | 23 (34%) | 15 | **44 (66%)** | 53 (60%) | 26 | **35 (40%)** | 3 (30%) | 5 | **7 (70%)** |

* Bold cells indicate correct identification of the author.
Discussion
This paper examines the criteria used by professors and students to determine whether a text has been written by a human or by AI. A random sampling of human student-written work and GenAI work presented to native English-speaking professors (L1EP) and non-native English-speaking students (L2ES) showed that L1EP were able to correctly distinguish between the two sources, while L2ES were not.
In evaluating text authenticity, the L1EP group were most accurate at judging "style". This was supported by comments from the L1EP raters that GenAI text often has an "odd explanatory tone" that is unlike a student author and appears "too cookie-cutter and without error" to be authentic work. Cooper (2023) remarked that the creators of ChatGPT accept that the model has its weaknesses and at times engages in awkward writing, noting issues with text synthesis tasks such as repetitions, contradictions, and coherence loss over long passages. Vincent (2022) likewise observed that ChatGPT lacks hard-coded rules for how certain systems in the world operate, leading to its propensity to generate fluent gibberish.
The L1EP group's greatest difficulty was in judging the "coherence" of GenAI papers, where they achieved only a 55% success rate. Respondents in both groups were considerably less successful at correctly identifying GenAI papers than human-authored ones.
While it may be suggested that the L2ES are gaining considerable experience with ChatGPT through day-to-day use for assisting with their assignments ("One Third of College Students", 2023), this alone did not appear to be enough for them to accurately differentiate between student and GenAI work. This lack of success could be attributed to English proficiency level, given that L1EP, and by extension L2EP, have a significant advantage in English language familiarity. It could also be attributed to inexperience in rating others' essays.
In conclusion, the research questions were successfully answered. Our study shows that native English-speaking professors (L1EP) are superior at evaluating text to determine AI or human authorship compared to non-native English-speaking students (L2ES). However, through adequate education and experience, L2ES have the potential to progress to an equal or better level. Repeating the study using human-authored papers corrected for mechanical errors with simple editing tools could result in lower rates of correct recognition. Additionally, further studies incorporating pre- and post-testing with an AI-identification treatment are encouraged. It is suggested that it takes more than familiarity with ChatGPT to ascertain the difference between human and AI writing.
References
Abdullayeva, M., & Musayeva, Z. M. (2023). The Impact of ChatGPT on Student's Writing Skills: An Exploration of Ai-Assisted Writing Tools. In International Conference of Education, Research and Innovation (Vol. 1, No. 4, pp. 61-66). https://zenodo.org/records/7876800
Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57, 100745. https://www.sciencedirect.com/science/article/abs/pii/S1075293523000533
Edwards, B. (2023). Why AI detectors think US constitution was written by AI. Ars Technica. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
Cooper, K. (2023). OpenAI GPT-3: Everything you need to know. Springboard. https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/.
Bowman, E. (2023). A college student created an app that can tell whether AI wrote an essay. NPR. https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism
Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. NPJ Digital Medicine, 6(1), 75. https://doi.org/10.1038/s41746-023-00819-6
Huallpa, J. J. (2023). Exploring the ethical considerations of using ChatGPT in university education. Periodicals of Engineering and Natural Sciences, 11(4), 105-115. https://www.researchgate.net/profile/Luis-Chauca-Huete/publication/373949976_Exploring_the_ethical_considerations_of_using_Chat_GPT_in_university_education/links/6504f02a9fdf0c69dfd0553f/Exploring-the-ethical-considerations-of-using-Chat-GPT-in-university-education.pdf
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., ... & Wheless, L. (2023). Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model. Research Square. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10002821/
Lazzeri, F. (2023). Generative AI, OpenAI, and ChatGPT: What are they?. Medium. https://medium.com/data-science-at-microsoft/generative-ai-openai-and-chatgpt-what-are-they-3c80397062c4
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819. https://arxiv.org/abs/2304.02819
Ma, Y., Liu, J., Yi, F., Cheng, Q., Huang, Y., Lu, W., & Liu, X. (2023). AI vs. human–differentiation analysis of scientific content generation. arXiv preprint arXiv:2301.10416. https://doi.org/10.48550/arXiv.2301.10416
One-Third of College Students Used ChatGPT for School Work During The 2022-23 Academic Year. (2023, September). Intelligent. https://www.intelligent.com/one-third-of-college-students-used-chatgpt-for-schoolwork-during-the-2022-23-academic-year/
Pitcher, A. (2023). Michigan teachers adapt to A.I. in classrooms with limited state guidance. WWMT. https://wwmt.com/news/local/artificial-intelligence-ai-teachers-classroom-students-usage-guidance-guidlines-chatgpt-detection-detectors
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected?. arXiv preprint arXiv:2303.11156. https://arxiv.org/abs/2303.11156
Sharma, S., & Yadav, R. (2022). ChatGPT–A Technological Remedy or Challenge for Education System. Global Journal of Enterprise Information System, 14(4), 46-51. https://www.gjeis.com/index.php/GJEIS/article/view/698
Shen, Y. (2014). On establishing the writer's credibility in persuasive writing. Theory and Practice in Language Studies, 4(7), 1511. https://www.academypublication.com/issues/past/tpls/vol04/07/28.pdf
Stephens, A. E. J. (2023). A Mixed-Method Triangular Approach to Best Practices in Combating Plagiarism and Impersonation in Online Bachelor’s Degree Programs (Doctoral dissertation, Marshall University). https://mds.marshall.edu/cgi/viewcontent.cgi?article=2760&context=etd
Villasenor, J. (2023). The problems with a moratorium on training large AI systems. https://policycommons.net/artifacts/4140617/the-problems-with-a-moratorium-on-training-large-ai-systems/4949274/
Vincent, J. (2022). AI-generated answers temporarily banned on coding Q&A site Stack Overflow. The Verge. https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers
Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1181712/full
Zapata, I. D. L. T., Heinzel, T., & Bernabei, R. (2023). AI Logic of Care: premises for upgrading the smart bandages for diabetic chronic wounds. https://dl.designresearchsociety.org/iasdr/iasdr2023/fullpapers/71/
Appendix: Distinction between AI and Human-generated Text
Four criteria
A-1 Coherence
Between writings by AI and by humans, there exist numerous syntactic distinctions that allow us to tell AI- and human-generated texts apart. Notably, recent advancements in text generation based on pre-trained language models have significantly improved text coherence (Ma et al., 2023). While GenAI texts have made great progress in this regard, they are not flawless when it comes to coherence and consistency (Ma et al., 2023). This discrepancy is evident in Table 2, where the evaluation of human-generated text ranks higher in terms of coherence. GenAI texts often demonstrate a high level of coherence, including consistent terminology usage. Nevertheless, they produce seemingly coherent sentences that lack understanding, bringing in off-topic content or excessive technical jargon. AI is not capable of understanding the context of the content as deeply as humans can.
A-2 Style
A style gap emerges when comparing GenAI texts with human-written texts. In evaluating text authenticity, as mentioned in the discussion section, the L1EP group relied heavily on the factor of "style". The texts generated by ChatGPT exhibit distinct limitations related to style, such as formulaic structure, repetitions, and contradictions. For instance, GenAI text often features shorter sentences that feel abruptly cut off. As highlighted by the L1EP group, GenAI text frequently displays an "odd explanatory tone", characterized by a polished and error-free appearance. AI models, including ChatGPT, are trained on specific datasets, which may introduce biases and tendencies into their writing. This can lead to the repeated use of certain expressions and patterns, further distinguishing it from human-generated content. The style gap between AI- and human-generated texts thus encompasses elements such as formulaic structures, sentence length, and an apparent explanatory tone. These contribute to the differentiation between AI- and human-authored content and underscore the importance of style in assessing the authenticity of a text.
A-3 Depth
In the context of differentiating between human- and AI-written texts, "depth" refers to the level of comprehensive understanding, richness of content, and the capacity to provide nuanced insights and critical analysis. A text with depth goes beyond surface-level information, giving a deeper, more profound exploration of the subject. In contrast, content without depth is characterized by superficial understanding, a limited scope, or an inability to provide in-depth analysis or viewpoints. Depth is a crucial factor in assessing the authenticity of texts, and a gap in depth remains between AI- and human-generated content (Ma et al., 2023).
A-4 Credibility
Credibility refers to the degree to which the audience considers the writer believable, or simply put, what the audience thinks of the writer. According to Shen (2014), credibility includes three core dimensions: expertise, which refers to the knowledge or ability ascribed to the writer; trustworthiness, which refers to the writer's perceived honesty, character, and safety; and goodwill, which means that the writer has the audience's benefit at heart, shows understanding of others' ideas, and is empathic toward his or her audience's problems. Credibility is therefore a significant criterion because it not only shapes the quality of the writing as a whole but also reflects the writer behind the text. ChatGPT often produces seemingly dependable but incorrect evidence, as a study on its limitations in writing medical literature shows (Johnson et al., 2023).
Essay prompts
Write a 300-350 word essay about one of the following (the numbers in parentheses after each topic give the number of human-generated essays / number of GenAI essays):
Topics
- Write an essay in the form of a letter to your parents explaining why you need more/less independence, or why you are the way you are. (1/1)
- Tell a story about an experience in your childhood from which you learned something, explaining what you learned. (1/2)
- Beauty is a word that brings different images to the minds of different people. Some people focus on external attributes that make people attractive on the outside, while others focus on internal aspects of character and personality. What makes people beautiful to you? (2/2)
- Personal advertisements for dates, dating services, and internet chat lines are just a few examples that show how many people are lonely. Some people might meet someone through services such as these, while others may not. What can people who are lonely do to make their lives fulfilling? (1/1)
- Phone addiction is a serious modern problem. Explain how you've fought with your own addiction and the steps you've taken to reduce it. (2/1)
- Many students have expectations of college that differ from reality. What advice would you give to incoming freshmen? (1/1)
Guidelines
- Before beginning your essay, remember that your main audience is your peers.
- How might you best convince your audience to take up your cause without alienating them, especially if they already disagree with your stance?
- What kind of evidence might you need to present in order to show that you are knowledgeable about your subject matter?
- What kind of solution might you need to provide in order to appear as though you are offering answers to the problem you have identified?