2.4 Using and Choosing GenAI Tools
It’s difficult to know where to begin with genAI applications. You might be wondering how to use the tools and how to choose the ones best suited to your purposes. This chapter will help you get started on using genAI applications and LLMs effectively.
Using GenAI Tools
GenAI models have evolved considerably since their initial release, and new models and AI assistants are being developed on an ongoing basis in both commercial and open-source arenas. Their capabilities are also expanding: developers can now create organized systems of AI agents in which several AI models or agents take on specific roles and then interact autonomously with each other to achieve a common goal. Regardless of the structure, good direction will result in effective outcomes.
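To make the idea of agent roles concrete, here is a minimal Python sketch that is not tied to any particular agent framework; the ask_llm() function is a hypothetical placeholder standing in for a real model call, and the role instructions are illustrative only.

```python
# A minimal sketch of two AI "agents" with distinct roles handing work to each other.
# ask_llm() is a hypothetical placeholder; in practice it would call a real model API.

def ask_llm(role_instructions: str, message: str) -> str:
    """Placeholder for a real model call; returns a canned string for illustration."""
    return f"({role_instructions}) response to: {message}"

def drafting_agent(task: str) -> str:
    # Agent 1: drafts content for the given task.
    return ask_llm("You are a technical writer; draft concise content.", task)

def reviewing_agent(draft: str) -> str:
    # Agent 2: reviews the first agent's draft and suggests improvements.
    return ask_llm("You are an editor; flag inaccuracies and unclear wording.", draft)

task = "Outline a proposal for installing a fire suppression system."
draft = drafting_agent(task)       # first agent produces a draft
feedback = reviewing_agent(draft)  # second agent critiques it autonomously
print(draft)
print(feedback)
```

In a real system, each agent would call an LLM with its own instructions, and the exchange could continue for several rounds before a human reviews the result.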
Whereas initially you had to use text to command LLM applications, you may now choose to use voice, text, images, the camera, data, and PDF files in your prompting strategy, though capabilities vary from one model to another. (For example, at this writing, Mistral/Mixtral offers only text-based functions.) In addition, genAI models have been trained using natural language processing to interact with you using a conversational approach, allowing you to engage with them in a way that is as familiar as speaking to another human.
Given the general public’s lack of familiarity with the technology itself and the implications of its use for our own work, for organizations, and for society, it’s best to proceed with courage, curiosity, and caution. The following offers some general tips on how to use genAI applications along with an overview of selected tools that are useful in technical contexts.
Tips for Using GenAI
With some genAI models, you can direct the model to use a specific tone: friendly, casual, professional, technical, and the like. In the case of ChatGPT’s GPT-4o model, you can select a male or female voice with a specific tone (casual or more serious) for verbal (and camera) engagement. Selecting the tone in the features or requesting it in your prompt will affect the vocabulary, syntax, and content of the output, as well as the manner in which the LLM interacts with you. In fact, you will find that LLMs often sound very much like humans and can exhibit unexpected human characteristics such as breath pauses, spontaneity, and chuckles. In most cases, and depending on the model, the LLM will tend to mirror the manner in which it is prompted, so the general rule for focused interaction in technical contexts is to be polite and professional. Your degree of politeness is significant: researchers have found that moderate politeness in prompting produces better outcomes and that the effect varies across languages and cultural contexts (Yin et al., n.d.). Besides, when you are polite, your interaction with the application will be that much more pleasant.
While you can certainly use LLMs in a casual manner by simply inputting a question, seeing what comes up in the output, and following up if necessary, in the workplace we tend to have limited time, a specific purpose, and contextual information we want to see in the output. Taking a more deliberate, problem-solving approach to the use of LLMs will therefore result in usable output and time saved.
The following tips, organized according to a typical workflow, aim to help you use genAI applications in day-to-day professional tasks such as email, report, and presentation drafting. Be sure to work within the requirements and constraints of the organization you are representing. Remember that you should always consider genAI outputs, whether they are charts, text, or images, as draft material to be revised and supplemented and that you should always declare your use of genAI content. Doing so will sustain your credibility as a reliable communicator.
Before Beginning
Verbal or text-based conversational prompting is the most effective way to draft typical workplace documents including technical materials, routine correspondence, reports, and presentations. This method can also be used for interactions that are more exploratory or inquisitive in nature. Assessing your goals and situational needs and constraints will help you develop a game plan for using the LLM to advantage. The amount of preparation will be determined by how complex your request and expected output will be. Of course, situations requiring only brief, one-time interactions for simple tasks will need little planning.
- Consider your audience’s needs and the context: Completing thorough audience and context analyses before beginning will help you to narrow the focus of your work, address the needs of the audience, and adapt your prompting strategy to the situation at hand.
- Identify your objectives: Clearly articulate your goal, and reflect on how the LLM can assist you in achieving it. For example, your objective may be to write a proposal to persuade a client to accept a plan for the installation of a fire suppression system. Ask yourself: Are you responding to a request, analyzing data, or addressing an opportunity or problem? If so, complete a thorough problem analysis to narrow the focus of your interaction with the LLM. Are you creating a proposal, sending correspondence, or creating a presentation or video? It might help at this stage to think of achieving your goal in a step-by-step manner so that you can plan an efficient research, drafting, and revision strategy and determine how the model can assist at each stage.
- Determine the types of output you are seeking: Tell the LLM what you expect it to produce: “Draft an outline for a proposal for the installation of a fire suppression system in a mid-sized low rise building.” Ask yourself: Are you requesting a document outline, a summary, a video or presentation, ideas for a topic, or a chart or graph that illustrates a point?
- Gather existing information and data: Collect the information and data you will use to create your prompts. Use whatever information you have available to inform and guide the LLM in focused tasks. Right now in most workplaces this is still done manually, but the process will soon be automated with the use of knowledge graphs. Another way to use existing documents is to upload PDF files to the LLM and then provide instructions on how you want the document to be processed: summarized, analyzed, or otherwise used in the development of output (see the sketch after this list).
- Select the genAI application: Choose the genAI model that will best achieve your goal by considering the following factors: modalities/capabilities, degree of reliability, cost, in-application tools, accessibility, and privacy policy. Many organizations will provide you with approved applications, so your choice of models will be limited to what is available. At Seneca, Copilot (connected to ChatGPT 4o and the internet) is available through Seneca accounts. In other contexts, you have great latitude in the choice of tools to use: open source or commercial, by subscription or free, by version, and you can choose based on capacity, features, ethical practices, capability, and the like.
- Align with your organization’s policies: Review your organization’s privacy, confidentiality, and other applicable policies, so you can work within its legal and ethical standards.
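As an illustration of the "gather existing information" step above, the following is a minimal sketch of pulling text out of an existing PDF and packaging it with instructions for an LLM. It assumes the open-source pypdf library; the file name and prompt wording are placeholders, not a prescribed workflow.

```python
# A minimal sketch: extract text from an existing PDF and build a prompt for an LLM.
# Assumes the open-source pypdf library; "quarterly_report.pdf" is a placeholder file name.
from pypdf import PdfReader

reader = PdfReader("quarterly_report.pdf")
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

prompt = (
    "You are assisting a technical communicator.\n"
    "Summarize the following report in 200 words for a non-specialist audience, "
    "highlighting safety findings and recommendations.\n\n"
    f"{document_text}"
)

# The assembled prompt could then be pasted into a chat interface or sent through an API.
print(prompt[:500])
```

Many chat applications let you upload the PDF directly instead, in which case the instructions above would simply accompany the uploaded file.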
When Using GenAI
Once you have a plan, you can more efficiently develop a strategy for prompting the genAI application to offer the output that will align with your purpose.
- Develop a prompting strategy: Review prompting techniques (see the GenAI Prompting chapter). Does the task at hand lend itself to casual, conversational, or structured prompting? Choose the method that will best help you achieve your goal. Doing so will reduce the amount of time and the number of iterations required.
- Consider whether to use an “all in” or a phased approach: You may want to try inputting a prompt that encompasses the entire task, or you may choose a phased strategy that involves breaking up the task into sequential stages progressing from one request to the next. Again, think of what you want to achieve and decide on the best approach.
- Craft specific prompts: The more specific the prompts, the more likely the LLM or other application will provide outputs that match your needs. Use your planning data and information to create prompts that contain information about the context, problem or need, criteria, goal, and so on. The more information you provide regarding context and topic, the more useful the output (a minimal sketch follows this list).
- Treat prompting as an iterative process: Review the outputs and refine your prompts to achieve more precise, goal-oriented responses. The first output is rarely the best, so engage with the application in a conversational manner and ask it to refocus its work using additional prompting. If the LLM does not produce the necessary output, consider using a different model.
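To show what specific, iterative prompting might look like when done programmatically rather than in a chat window, here is a minimal sketch assuming the OpenAI Python SDK and the gpt-4o model; the role descriptions, context details, and follow-up wording are illustrative placeholders, not a required format.

```python
# A minimal sketch of specific, iterative prompting (assumes the OpenAI Python SDK).
# Model name, context details, and follow-up wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

messages = [
    {"role": "system", "content": "You are an assistant helping draft workplace documents."},
    {"role": "user", "content": (
        "Context: mid-sized, low-rise office building; the client is cost-sensitive.\n"
        "Task: draft an outline for a proposal to install a fire suppression system.\n"
        "Constraints: professional tone, under 400 words, include a budget section."
    )},
]
first_draft = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first_draft.choices[0].message.content)

# Iterate: keep the conversation history and refine the request based on the first output.
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})
messages.append({"role": "user", "content": "Shorten the outline and add a section on maintenance costs."})
revised = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revised.choices[0].message.content)
```

The same pattern (state the context, task, and constraints, then refine in follow-up turns) applies equally when you are typing prompts directly into a chat application.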
Reviewing the Output
Reviewing the content, especially the technical information, created by the LLM is a critical step in ensuring that what you include in your final document is accurate, complete, and reflective of your own and your organization’s values. While LLMs offer many advantages, their known hallucinations and lack of verifiable citations can make your final document questionable, and in certain technical contexts inaccuracies can result in serious safety failures. When your reputation is on the line, opt for a careful review of the work you submit.
Use the following method to complete a scaffolded review of all LLM output before you make use of it in your communications:
- Corroborate information: Regardless of variations in conventions of use, you must be vigilant when reviewing LLM output. Some LLMs do not include citations, so you must check or corroborate all technical details and assertions or claims, complete the research to confirm claims and information, revise arguments, and add citations when they have not been included. Other models, like MS Copilot and ChatGPT, will include citations; however, the content of the output may be copied in whole or in part from a source, or the citation may be altogether incorrect or incomplete (more so for Copilot). Remember, most readers are interested in your ideas, not those of a bot, so avoid including large swaths of LLM content in your documents. Apply your skills as a communicator to add relevant content, edit for tone and style, and make the content your own. Also, ethical citation and documentation standards must be applied to all work that you produce, with or without the assistance of genAI. You may find Mike Caulfield’s SIFT model, shown in Figure 2.4.2, helpful in conducting your review.
The Four Moves of the SIFT Model (in brief, adapted for genAI output)
- Stop: Take a moment to glance through the genAI output to highlight claims that are and are not documented.
- Investigate the source: For claims that have not been documented, complete the research to verify the information and document the source. For claims that have citations, check the source material. Has the claim been copied with or without quotations? Are the citations accurate and complete?
- Find better coverage: If the claim does not accurately represent the idea you want to convey, search for a better source.
- Trace claims, quotes, and media to the original content: Ensure that the claim or quote is not taken out of context by going to the original source to understand the original context of use.
- Review the output: Remember that you are responsible for all LLM-produced content you include in your communications. Ensure the output you drop into your documents reflects the values of the organization you serve:
- Accuracy and relevance: Determine whether the output aligns with your intended purpose and whether the information is accurate. Some parts of the output may be on track while others are not, so use a discerning eye to review all content. Note that LLMs are known to “hallucinate,” or make up information, so be sure to also complete an accuracy check.
- Bias: LLMs sometimes create outputs that contain bias related to race, religion, ethnicity, socio-economic status, and gender. Biased assumptions are harmful to people and may be damaging to the company’s and your own reputation. Responsible use of genAI involves ensuring that the content that you use is inclusive.
- Sustainability principles: Companies recognize the importance of responding to issues such as climate change, global developments, income disparities, unemployment, and Indigenous inequities. Ensure that the LLM output you use aligns with the organization’s sustainability policies and practices.
- Engage with the content: While the quality of LLM output has improved substantially, to the point where it is difficult to distinguish between machine-generated content and human text, remember that humans would very much prefer to read content that is, if not entirely created by humans, wholly reviewed and edited by them. Unfortunately, recent research (Dell’Acqua et al., 2023; Dell’Acqua, n.d.; Mollick, 2024) has revealed that in hybrid genAI/human collaboration, humans tend to disengage from the work and settle for outputs that are only just good enough without striving to improve on them. Use LLM output as the basis for your own creation, and don’t settle until you have achieved authenticity, precision, completeness, and excellence. Incorporate information based on your own expertise, and edit for style and tone so that the LLM content you use reflects your own uniqueness. Remember that genAI applications are intended to augment your own knowledge and skills rather than to replace them.
- Document and declare your use of genAI content: Remember that ethical citation/declaration and documentation standards must be applied to all work that you produce—with or without the assistance of genAI. At Seneca, you are required to quote or declare genAI output. Check with your professors regarding their expectations. And you may wish to consult Seneca’s Guide on Citing and Documentation: Artificial Intelligence for specific detail on citation practices approved by the institution for documents you create for course-related work. In most workplace contexts, a declaration of use (model, mode, usage, date) would suffice.
Citing and Documenting GenAI Content (Seneca Libraries, 2023).
- Declaring Use: The following prompt was used with Copilot in Creative Mode on December 1, 2024: [insert the prompt]
- In-text Citation: When quoting or paraphrasing text culled from output, use this citation method: (Microsoft, 2024)
- Citing Images: Adobe Firefly, 2024 (https://firefly.adobe.com/)
- References:
  - Microsoft. (2024). Copilot [Large language model]. https://copilot.microsoft.com
  - Adobe Firefly. (2024). https://firefly.adobe.com/
For more information on citation, go to Unit 8 in this text and visit Artificial Intelligence – APA Citation Guide (APA 7th Edition) – LibGuides at Seneca Libraries.
When in the workplace, do the following:
- Check with the ethics officer or your immediate supervisor to learn about the organization’s genAI declaration policies and practices.
- Keep records of the prompts and outputs you have used to prepare your content.
- Recognize that you are responsible for the LLM output that you use in your business documents, correspondence, presentations, meetings, and so forth.
- Let your audience know how you have used LLMs; doing so will help to sustain their trust. See the GenAI Use declaration below for an example.
Choosing GenAI Tools
A good place to start when choosing which genAI applications to use is to ask yourself the following questions:
- What am I trying to create: images, document, music, code, presentation, video?
- What do I want the genAI application to do for me: summarize, outline, code, research?
- What is the outcome that I want to achieve? Here, consider the degree of precision, accuracy, agility, resolution, reliability, etc. in the output you need as the different tools exhibit varying degrees of performance.
List of Selected Tools
Many thousands of AI applications are available, not to mention GPT AI assistants. The following is a list of selected genAI applications that can be used for technical communication purposes. These tools offer features that will help you to create content and documents, draft code, summarize documents, create presentations and videos, conduct research, create customized images, analyze data sets, and work in multimodal ways.
Note: At the time of writing, many items in the following list have not been approved by Seneca and are to be used only with your personal login credentials and at your own risk and responsibility. Seneca encourages you to make use of the approved Microsoft Copilot for academic purposes. Please see Seneca’s Generative Artificial Intelligence (GenAI) Policy for more information.
Chat Applications (Multipurpose Content Generators)
- ChatGPT, Claude, Gemini (formerly Bard), HuggingChat, Llama 3, Mistral/Mixtral: Each of these models is a natural-language conversational agent that creates outputs based on the prompts given. They offer various features and capabilities, generally including content creation, summarization, data analysis, and image generation through text and/or voice prompting. Content created will display varying degrees of accuracy and precision, so the outputs should always be considered draft material.
Coding Applications
- Advanced Data Analysis (formerly Code Interpreter): This OpenAI application will allow you to write and run Python code.
- CodeWhisperer: CodeWhisperer is Amazon’s code generator that suggests code in real time.
- GitHub Copilot: GitHub Copilot suggests code and functions in real time based on context in comments and previous code (see the example after this list).
- Tabnine: Similar to GitHub Copilot, with a few differences, including the claim that its models are trained only on open-source code with licenses that permit such use.
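To illustrate the comment-driven style these assistants respond to, here is a small Python example; the function below is the kind of completion an assistant such as GitHub Copilot might propose from the leading comment alone, though the exact suggestion will vary and should still be reviewed and tested.

```python
# Typical comment-driven prompt for an in-editor coding assistant:
# "Return the n most frequent words in a text, ignoring case and punctuation."

import re
from collections import Counter

def most_frequent_words(text: str, n: int = 5) -> list[tuple[str, int]]:
    """Example of the kind of completion a coding assistant might suggest."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

print(most_frequent_words("The pump failed; the pump was replaced and the pump now runs.", 3))
```

As with text output, suggested code is draft material: verify that it runs, that it is licensed appropriately, and that it meets your organization’s standards before using it.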
Document Formatting and Design
- Canva: Canva offers templates for any number of communications, including documents, presentations, infographics, cards, social media posts, banners, and more. With Magic Write, the composition task is initiated but must be followed up with editing.
- Gamma: Gamma features beautiful formatting for documents, presentations, and webpages.
Productivity
- MS Copilot: With or without access to your organizational data, Copilot in the enterprise version of Microsoft can summarize meeting notes, help draft documents, and offer other prompt-based services. Copilot is available in Microsoft Office 365 with a purchased license and is also available for individual use for free or by subscription for the Pro version. Access to full capabilities depends on which functions the administrator has released.
- Notion.so: An all-in-one productivity platform, with Notion.so you can write, collaborate, and plan projects using AI-powered technology.
- Otter.ai: Otter is a meeting assistant that can join, transcribe, summarize, and record meetings. It’s a great example of the types of meeting assistants now available.
Research
- Consensus: This AI search engine quickly goes through published research papers to find information based on your search parameters. Citations are provided.
- Elicit: Elicit searches through papers and citations then extracts and synthesizes key information according to your specified research focus. Best for scientific research.
- Perplexity: Perplexity is a search engine that draws on user-selected sources (All Web, Academic, Reddit, YouTube, and Wolfram Alpha). It also offers writing capabilities without accessing the internet.
- Research Rabbit: Research Rabbit is a citation-based mapping tool that focuses on the relationships between research works. It uses visualizations to help researchers find similar papers and other researchers in their field.
Image Generation
- Adobe Firefly: Firefly is an Adobe product that generates images based on a text prompt. Its filters allow you to quickly and intuitively modify the initial image (e.g., re-creating the same image in a different style). It’s currently free to use, but requires you to create an account and sign in.
- DALL-E 3: DALL-E is OpenAI’s image generation tool. DALL-E 3 will generate realistic images using voice and text prompting and includes inpainting (editing or adding elements within an image), outpainting (extending an image beyond its original borders), and variations.
- Stable Diffusion: Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.
Presentations
- Canva: Select the Presentations feature and use Magic Design to help you develop the content for your presentation in minutes.
- SlideGPT: SlideGPT is free to use to create presentations. Once the presentation has been generated, you can download it as a PowerPoint file for a small fee.
- Tome: Tome generates presentations in minutes based on text provided or even a short prompt, complete with AI-generated images.
Video Creation
- Canva: Canva offers video creation tools that are powered by AI. Features include content and image commands that are created through Magic Media. This tool also includes talking heads.
- Sora: Released by OpenAI, Sora enables users to create realistic, near true-to-life videos using text commands.
Futurepedia is a good compendium site where you can search for free and premium genAI tools according to your specified purpose.
Attributions
Content for the section on genAI tools has been partially adapted and updated from:
Center for Faculty Development and Teaching Innovation. (2023). GenAI Tools – Generative Artificial Intelligence in Teaching and Learning (pressbooks.pub). Centennial College. CC BY 4.0.
University of British Columbia (UBC). AI Tools. Tools – AI In Teaching and Learning (ubc.ca). CC BY 4.0.
Georgetown University. Artificial Intelligence Tools – cndls website (georgetown.edu). CC BY 4.0.
GenAI Use
Copilot (Creative Mode) was used by Robin L. Potter to ideate content for the section on Tips for Using GenAI using the following prompt: “Draft a set of tips for business professionals on the use of generative AI; include the following sections: Before You Begin; When Using GenAI; Reviewing the Output.” Related references have been included in the list below.
Chapter review exercises were also created with the assistance of Copilot.
References
Adobe Experience Cloud Team. (2023, August 25). 5 tips for getting started with generative AI for your business (adobe.com)
Amazon. What is CodeWhisperer? – CodeWhisperer (amazon.com)
Caulfield, M. (2019, June 19). SIFT (The Four Moves) – Hapgood
Copilot. (2024, January 16). Used in ideation for the section on Tips for Using GenAI. Microsoft.
Dell’Acqua, F. (n.d.) Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. Falling+Asleep+at+the+Wheel+-+Fabrizio+DellAcqua.pdf (squarespace.com)
Dell’Acqua, F., McFowland III, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K.C., Rajendran, S., Krayer, L., Candelon, F., and Lakhani, K.R. (2023, September 15). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013, Available at SSRN: https://ssrn.com/abstract=4573321 or http://dx.doi.org/10.2139/ssrn.4573321
Futurepedia – Find The Best AI Tools & Software
Gaurav, A. (2023, June 3). 13 Best FREE AI Productivity Tools 2023 | AI monks.io (medium.com)
Gartner. Generative AI: What Is It, Tools, Models, Applications and Use Cases (gartner.com)
Google. Gemini. Gemini – Google DeepMind
Mollick, E. (2024, January 17). LinkedIn post on “falling asleep at the wheel.” Ethan Mollick on LinkedIn: A fundamental mistake I see people building AI information retrieval…
Potter, R. L. (2024). CRED: Reviewing GenAI output: Infographic.
University of Michigan (UMich). AI Tools | U-M Generative AI (umich.edu)
Wattanajantra, A. (2023, July 20). Generative AI in 7 easy steps: A practical business guide – Sage Advice Canada English
Yin, Z., Wang, H., Horio, K., Kawahara, D., & Sekine, S. (n.d.). Should we respect LLMs? A cross-lingual study on the influence of prompt politeness on LLM performance. 2402.14531 (arxiv.org). Via Ethan Mollick, LinkedIn, March 2024: Ethan Mollick on LinkedIn: Here’s an initial answer to the question I get asked most about prompting:…