The year 2023 is proving to be a fascinating one for generative AI tools, with ChatGPT, the latest chatbot from OpenAI, crossing the 100 million user mark in January 2023 and becoming the fastest-growing consumer application to date (source: DemandSage). ChatGPT is a large language model that provides detailed answers to a wide range of questions. Ask it to summarise a report, structure a presentation, or suggest activities for your session, and you may well be pleased with the results. ChatGPT’s ease of use, speed of response, and detailed answers have seen it quickly dominate the AI generator market and attract both widespread acclaim and criticism.

While early media attention focused on the negative, playing on sci-fi tropes and the out-of-control desires of AI tools, scientists such as Stephen Wolfram have been exploring and explaining the capabilities and intricacies of ChatGPT, expertly raising awareness of its underlying Large Language Model architecture and its limitations as a tool.

In his recent talk in the Turnitin webinar ‘AI: friend or foe?’, Robin Crockett, Academic Integrity Lead here at UON, discussed how a better understanding of ChatGPT’s ability to create content can be used to deter cheating with AI tools. However, concerns have also been raised about students using the tool to cheat: the minimal effort required to enter an essay question into ChatGPT may produce an essay of an adequate standard to, at least, pass an assessment (source: The Guardian).

AI: friend or foe? (session)

Link to YouTube, Turnitin Session: AI friend or foe? 28/02/23 (CC Turnitin)

The next webinar in the Turnitin series entitled ‘Combating Contract Cheating on Campus’ with expert speakers Robin Crockett, Irene Glendinning and Sandie Dann, can be found here.  

One thing the media stories seem to agree upon is that generative AI tools have the potential to change the way we do things and challenge the status quo, with some traditional skill sets at risk of being replaced by AI, and new opportunities opening up for those who embrace these technologies.

In terms of how AI might be utilised by academic staff, Lee Machado, Professor of Molecular Medicine, describes his use of AI tools in cancer classification and how AI tools might be used to help answer questions in his field. Lee also discusses how he feels AI tools such as ChatGPT could improve student experiences by providing personalised feedback on essays and by simplifying complex information.

Interview with Lee Machado
Click to view video: Interview with Lee Machado – Experiences of Generative AI (link opens in new tab)

In the following interview, Jane Mills, Senior Lecturer in Fashion and Textiles, discusses how the fashion industry is embracing AI, emphasising that rather than AI replacing creativity, it can be used to enhance creative work. Students Amalia Samoila and Donald Mubangizi reflect on the collaborative nature of working with AI, using examples of their current work.

Interview with Jane Mills
Click to view video: Creative use of AI in BA Fashion (link opens in new tab)

In his interview, Rob Howe, Head of Learning Technology, discusses the evolution of AI technology and its impact on the academic world. He explains that AI has come a long way since the original definitions by Minsky and Turing in the 1950s, and that improvements in processing speed and access to data have made it a revolutionary technology. Rob describes how AI has already been integrated into tools and academic systems in universities, and how its rise has led to a change in the way assignments are being considered, as students may now be using AI systems to assist in their studies. Although there is discussion around institutions wishing to ban the use of AI in academic work, Rob emphasises the importance of learning to live with such tools and using them in a way that supports educators and students. AI systems have the potential to be a valuable resource for tutors to generate learning outcomes and offer new ideas which can then be critically evaluated and modified.

Interview with Rob Howe
Click to view video: Interview with Rob Howe – Artificial Intelligence (link opens in new tab)

Exploring AI through multiple platforms and apps is a great way for users to get started. However, it’s important to note that not all AI tools are free. While many offer free tokens or limited availability for new users, some require payment. Our team member, inspired by the use of AI-generated images by an art student at UON, tried the iOS app Dawn AI, which offered a 3-day trial. They enjoyed generating 48 new versions of themselves and even created versions of themselves as a warrior and video game character.

However, it’s important to consider whether using AI in this way is simply a gimmick or if it has a more purposeful use. It’s easy to dismiss AI-generated images as mere novelties, but the potential applications of this technology are vast and varied. AI-generated images can be used in advertising, social media marketing, and even in the film industry. As AI continues to develop and evolve, we’re likely to see even more innovative and exciting uses for this technology. The possibilities are endless. 

The full extent of how AI tools will fit into daily academic life is yet to be determined. While some believe that AI has the potential to revolutionise the way in which we teach and learn, others remain sceptical about its ethical implications and its potential to negatively impact student engagement.

One of the main concerns is whether AI tools will prove to be a positive force that enhances creativity and supports students, or whether they will provide a shortcut to assessments that undermines the learning process. It is clear that there are significant implications for how educators use AI tools in the classroom.

To explore these issues and more, Rob Howe, Head of Learning Technology at the University of Northampton (supported by University staff, external colleagues and the National Centre for A.I.), will be running a series of debates and talks on campus and online. These discussions will aim to assess the potential of AI tools and examine their ethical implications. Participants will discuss the challenges and opportunities presented by AI, and debate the best ways to incorporate these tools into the classroom.

The first of these debates, titled ‘The computers are taking over…?’, is on March 15th. The full details can be found here: https://blogs.northampton.ac.uk/learntech/2023/01/30/the-computers-are-taking-over-debate/

Link to future Webinar from the series: https://www.turnitin.com/resources/webinars/turnitin-session-series-contract-cheating-2023 

Authors: Richard Byles and Kelly Lea.


In celebration of International Women’s Day, I decided to use AI image generation to create some beautiful, photo-realistic portraits of women from around the world, in traditional dress.

Those of you who know me personally will know that I worked as a Freelance Graphic Designer for many years before becoming a Learning Technologist. Whilst I no longer work as a Graphic Designer, I do still keep my ear to the ground in the Graphic Design communities where there has been such a mixed reaction to AI image generation. It has been really interesting to watch the reactions over the last six months, as AI image generation has improved so much in such a short time. I, like many others, see it as an amazing and powerful tool that can work with a digital artist to produce pieces of work in a fraction of the time.

Limitations at the beginning

I used deepai.org, a free online text-to-image website that doesn’t require any login or registration, to play around and explore this new medium. For those simply wanting to type a keyword or two and see the result, it is really good fun. You could waste hours of your life just typing in different keywords and seeing what you get. It’s just so much fun creating weird and wonderful images! Here is one of my first attempts. I’d just asked for a sunflower; I wasn’t expecting a little panda face peering out from the middle. I quickly learned what I’d done wrong and was determined to get better control over the results.

Sunflower illustration with panda eyes in the middle. AI Image Generation

Writing Prompts: There is a skill to it!

I went back to the Graphic Design community blogs and YouTube videos where I’d seen absolutely stunning results, with futuristic and surreal cityscapes and weird fantastical creatures. Most of the designers I saw are using a platform called Midjourney, which offers a free trial and then a subscription starting at just $10 a month for its cheapest plan. These AI artists, and yes I will call them artists despite how controversial that is, are using and sharing specific prompts that, through trial and error, they have found work really effectively to achieve certain visual effects.

It is quite well accepted that AI can’t do hands and often can’t do faces particularly well. I often see lions with 6 legs. You end up with some fairly disturbing images sometimes. The image below was created when I asked for a scene with The Queen of England. You’ll see in this example what I mean when I say it can’t do faces. (The little furry, three-eared creature, with no eyes, was supposed to be Paddington 😞).

AI image generation. Image of the Queen with a blur where her face should be.

Harnessing the power of AI

Despite using a free AI image generator, which states very clearly on its homepage NOT to expect photo-realism, I was blown away by the quality of the images that came out. The only ones I’ve omitted from my gallery below are those that looked a bit too airbrushed.

The prompt I generally used went as follows (where X is the nationality):
“Create a portrait of a traditional X woman, clear facial features, cinematic, 35mm lens, f/1.8, accent lighting, global illumination”
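If you’re generating a whole series of portraits, the template above can be filled in programmatically rather than retyped each time. Here’s a minimal sketch in Python (purely illustrative — the function name is mine, and sending the finished prompt to an image-generation platform is left to whichever service you use):

```python
# Fill the nationality into the portrait prompt template from this post.
# This is just a string-formatting helper; it doesn't call any AI service.

PROMPT_TEMPLATE = (
    "Create a portrait of a traditional {nationality} woman, "
    "clear facial features, cinematic, 35mm lens, f/1.8, "
    "accent lighting, global illumination"
)

def build_prompt(nationality: str) -> str:
    """Return the full portrait prompt for the given nationality."""
    return PROMPT_TEMPLATE.format(nationality=nationality.strip())

if __name__ == "__main__":
    # Generate a prompt per nationality, ready to paste into your chosen tool.
    for n in ["Japanese", "Kenyan", "Peruvian"]:
        print(build_prompt(n))
```

You could then paste each generated prompt into deepai.org, Midjourney, or any other text-to-image tool, swapping the nationality to build up a gallery.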

Why don’t you give it a go and try a different nationality? I’d love to hear how you got on.

A much longer version of this prompt was originally shared on Reddit, and I took it from a YouTube video. You can watch it if you want to understand more about what some of those elements in the prompt are: https://youtu.be/KXCVBu4btUk. (Photographers reading this will already have recognised some of those terms.)

How AI image generation works

You may be looking at the images, wondering who these people are and whether they want AI using their faces. Well, you may be surprised to find out that none of these women are real people. They do not exist. You will not find these faces anywhere on the internet. Of course, coincidentally, they might happen to look like someone in the world, but the faces, along with the rest of the image, are created by AI.

For example, if I want to paint a picture of a horse, I don’t have a horse to look at, so I’ll find a number of images on the internet to observe the proportions, the face shape, the mane, etc. I look at lots of different images from different angles to get a good idea of what it looks like. Then I’ll do my painting based on what I’ve observed. Similarly, the AI has learned from millions of images during its training, and it draws on that learning, guided by the specifics you put in your prompt, to create a brand new, original (Royalty Free*) image just for you. If you don’t like it, you can just tweak your prompt and it’ll make you a brand new, original image.

*Check the T&Cs of the platform you are using

Moving forward with AI

AI is here to stay whether we like it or not. I hope that we can appreciate it for what it can do for us, and embrace the technology. I am all for technology that can save us time and AI image generation certainly does that. Does it replace the artist? No, not necessarily. As you have seen in the examples, there is a skill, and you do have to learn how to get the best results. I look forward to seeing the images get better and better.

The new features in Blackboard’s March upgrade will be available from the morning of Friday 3rd March. This month’s upgrade includes the following new features to Ultra courses:

  • Prevent editing or deletion of discussion posts
  • Improved engagement analytics in Ultra courses
  • Improved attempt switching when grading student submissions with multiple attempts

Prevent editing or deletion of discussion posts

The March upgrade includes an important enhancement to discussions in Ultra courses, which allows staff to prevent students from editing or deleting their discussion posts while the discussion is ongoing.

At present, staff can choose to lock an assessed discussion on the due date, but cannot prevent students from editing and deleting their own discussion posts prior to the due date. Following the March upgrade, staff will be able to select ‘Prevent editing’ when setting up an assessed or non-assessed discussion, which will make all published posts permanent.

• Discussion Setting with ‘Prevent editing’ selected

More information about setting up and using Ultra discussions is available at: Blackboard Help – Create Discussions

Improved engagement analytics in Ultra courses

Following the March upgrade, staff will be able to get a quick overview of their students’ engagement in their Ultra courses.

Under ‘Course Activity’ in the Analytics section of an Ultra course, staff will be able to see how much time students have spent in their Ultra course, along with the number of days since their last access. Both ‘Hours in Course’ and ‘Days of Inactivity’ will be sortable ascending and descending, and from this view staff will be able to select one or more students and bulk message them.

• View of ‘Course Activity’ panel following the March upgrade

Improved attempt switching when grading student submissions with multiple attempts

After the March upgrade, it will be quicker and easier to navigate multiple student submissions to Blackboard assignments in Ultra courses.

Rather than having to choose which submission to view, staff will immediately be presented with the most recent submission, and will be able to switch between submissions directly inside the submission viewer.

• Viewing a Blackboard assignment with multiple submissions

More information

As ever, please get in touch with your learning technologist if you would like any more information about the new features available in this month’s upgrade: https://libguides.northampton.ac.uk/learntech/staff/nile-help/who-is-my-learning-technologist

The use of 3D printing and scanning in the prosthetics and special effects industry is an emerging field with immense potential. In this video case study, technician demonstrator Paddy Costelloe from Games & Computing Software Technology discusses how he took Hair and Makeup students through the entire process of 3D scanning and printing to create prosthetics, from scanning to repairing the scanned data and using Cura software to create 3D prints. 

The students discuss how they were able to see the process firsthand, with one of them getting a scan of their face. The goal was to excite the students about the prospects of 3D printing and how it can be used to improve the industry. Students’ comments in the film include that it was great to see the technology in action after reading about it in theory, and that they found the breakdown of the technology easy to understand and could see how they would use it in their future work.

Poppy Twigger, Technician Demonstrator for Hair, Makeup and Prosthetics, highlights the potential of 3D printing in the industry, with 3D-printed moulds, make-ups, and props becoming increasingly common. She emphasises that while 3D printing would not replace practical effects, it is quickly becoming an integral part of the industry, allowing for a blended procedure that increases the quality and speed of the makeup process. The use of 3D printing in prosthetics and special effects is an exciting development that is set to transform the industry in the coming years.

View 3D printing and scanning workshop case study video – Link opens in new tab

New Ultra Flexible Grading Interface: Technical preview & feedback opportunity

Blackboard are currently developing a new assessment grading interface for Ultra courses, and are looking for academic staff to test and provide feedback on the proposed new flexible grading interface over the coming 4 – 5 months.

The engagement will largely be self-paced, with staff working through various grading workflows (as they get built and added) in their own time, and providing feedback via a survey form. In terms of time commitment, it is envisioned that this will take no more than a couple of hours per month.

If you would like to get involved with this project, and to help shape the design of the new Ultra flexible grading interface, please sign up here: Flex Grading Tech Preview Sign up


The new features in Blackboard’s February upgrade will be available from the morning of Friday 3rd February. This month’s upgrade includes the following new features to Ultra courses:

  • Polygon shape tool available when creating hotspot questions in Ultra tests
  • Sort items by grading status in the Ultra gradebook
  • Students can see other members of their group in Ultra courses
  • Ally alternative format views count towards progress in progress tracking

Polygon shape tool available when creating hotspot questions in Ultra tests

Since the November upgrade, staff have been able to create hotspot questions in Ultra tests. Initially, the hotspot area could only be rectangular, but following the February upgrade staff will be able to define complex hotspot areas in Ultra tests using the polygon shape tool.

• Defining a complex hotspot area using the polygon tool

More information about how to add and use hotspot questions is available at:

Sort items by grading status in the Ultra gradebook

Following the February upgrade, when viewing the gradebook in list view, staff can sort the gradebook by the grading status.

• Sorting the gradebook by grading status

Students can see other members of their group in Ultra courses

After the February upgrade, students who have been assigned to groups will be more easily able to see who they are in a group with. However, they will not be able to see any information about groups that they are not a member of, nor will they be able to view detailed information about their other group members. All that will be disclosed when viewing other group members will be their name, their role in the course, and their profile image if they have uploaded one.

• A view of a student viewing the other members of one of the groups of which they are a member

Ally alternative format views count towards progress in progress tracking

When documents are uploaded into NILE they are automatically made available in various additional accessible formats by Ally. Following the February upgrade, when students download and view one of Ally’s accessible versions of a document, this will be tracked by Ultra’s progress tracking tool.

You can find out more about Ally at:

More information

As ever, please get in touch with your learning technologist if you would like any more information about the new features available in this month’s upgrade: https://libguides.northampton.ac.uk/learntech/staff/nile-help/who-is-my-learning-technologist

Image showing the main title - So, here's the thing

You may have seen films where artificially intelligent devices either help or hinder humans – but where is it all going?

Artificial Intelligence will increasingly form part of our daily lives but what actually is it and what might be the benefits and challenges for us in our future home, study and work environments?

The debate, moderated by the UK’s National Centre for Artificial Intelligence, will raise questions for all of us about the way in which this technology will impact our lives.

This session is aimed at everyone, regardless of their background and level of expertise.

Hear both sides of the argument and vote at the end!

This event is taking place in person and online.

In person details and registration:
https://AIdebateinperson.eventbrite.com

Online details and registration:
https://AIdebateonline.eventbrite.com


Simulation has become an integral part of teaching and learning pedagogy within the Health Faculty at UON.  In order to operationalise the faculty’s simulation strategy, a lead for simulation was recruited in September 2022. Roshini Khatri, Head of Health Professions, explains why:

“Simulation is used regularly in Healthcare as a learning and teaching technique to create situations or environments to allow persons/students to experience a demonstration of real-life scenarios for the purpose of practice and learning in a safe and nurturing environment. We have chosen to embed simulation as part of our curriculum to ensure that we are using contemporary and innovative activities to support the healthcare professionals of the future.”

The new Academic Lead for Simulation, Kate Ewing, is passionate about how the use of simulation and virtual reality scenarios can create more immersive, engaging, and productive learning experiences. Kate explains that simulation ranges from practising a technical skill, for example where students might be learning to catheterise, to a more immersive scenario designed to give students the opportunity to experience a clinical situation in a safe and controlled environment.

(Click the image or text link below to launch the video in a new tab)

Image: Academic Lead for Simulation, Kate Ewing.
Video: Interview with Kate Ewing and Hannah Cannon

The challenge Kate faces is that scenarios need to be written and carefully constructed with learning outcomes at the forefront. Kate admits this can create a ‘heavy workload’ for academic staff, so her aim is to see how these scenarios can be utilised across programmes so that they fulfil a range of learning outcomes, allowing students to work inter-professionally with each other rather than having a single use.

Kate is aware of the limitations of this technology, which can only be used by small groups or individuals at a time. However, Kate’s strategy provides a “shift in thinking” which encompasses all of the students in the learning process. In her learning situations, the students become observers of the process taking place. Kate comments: “Evidence says, if you debrief the situation and the learning in the right way, observers gain as much as the participants. The simulation is seen as an excuse for a debrief – the debrief is where the rich learning takes place”.

Image: Hannah Cannon carries out a debrief session with her nursing students.

Kate highlights how simulations in a learning environment are very different to clinical settings. “Students aren’t just testing out their skills, as they might do in a clinical environment, rather, they are developing skills of communication and those human factors which require a more intricate debrief strategy”. Although Virtual Reality (VR) scenarios contain published debriefs, Kate feels strongly these need customising so they can be mapped closely to the learning outcomes of individual programmes at UON.

Hannah Cannon, Practice Lead for Nursing Associates, has been working with students on their clinical skills using a VR platform in a whole-class situation. Hannah said that in her experience, students seem to feel “really safe” when engaging with scenarios in this way. Her students can play through a scenario safely without the need to worry about the consequences of their decision-making. Hannah believes that by having a group of students present, it allows for a more productive and wider discussion about patient care. As a team, the students can work together to make decisions about the patient’s treatment.

Image: Hannah Cannon Senior Lecturer in Practice Development -Nursing

Hannah emphasises how the use of a debrief, which is usually twice as long as the scenario, permits her students to have more effective discussions about the scenario and to reflect on their decision-making. Hannah feels this style of immersive learning allows her students to better grasp their learning outcomes by having the opportunity to see their decision-making play out and then reflecting on the end result. She goes on to say that it “heightens students’ self-awareness both professionally and personally,” which she feels is fundamental in nursing care.

Although immersive and engaging, Kate understands simulations are not an “easy answer” to fulfilling learning outcomes at UON, due to their time-consuming nature. But Kate is passionate that, through her role, she will be able to help create a strategy that supports staff not only to fulfil learning outcomes in a more productive way within their own programme, but to enable collaborative scenario-based learning to be adopted across programmes in a more cohesive and versatile way.


Would you like to meet with members of Blackboard’s Product Management Team, and to have some input into the development of the groups tools that are available in Ultra courses?

If so, Blackboard are hosting an online Ultra groups tools focus group specifically for staff at the University of Northampton at 3pm on Monday the 23rd of January.

If you would like to attend, please email Robert Farmer: robert.farmer@northampton.ac.uk


Virtual reality (VR) and augmented reality (AR) are quickly gaining traction at the University as effective technologies for teaching and learning.

With VR in particular, it is tempting to imagine a world in which students put on their headsets for a complete University experience. However, as these four films show, the reality at UON is more interesting, with academics exploring how these technologies can be used in a wide range of creative ways in the classroom.

The first film explores how, in 2022-23, Senior Lecturer in Marketing Kardi Somerfield worked with the Police, Fire and Crime Commissioner’s office on a live brief to create a virtual reality scenario promoting safer student experiences in Northampton.

(Click the images or text links below to launch the videos in a new tab)

Image: Digital Marketing students demonstrate Safer Northampton VR project at UON open day.
Video: Interview with Kardi Somerfield – Safer Northampton VR Project Duration: 3.55

Using a RiVR Link system, the second-year Digital Marketing students conceptualised, planned, recorded, edited, and released their unique VR experience, with accompanying branding including flyers, T-shirts, and signage.

The project was launched to widespread acclaim and media attention, providing the students with highly valuable digital skills. It is an exemplar of how VR technologies can be used creatively as content-creation tools in non-technical subject areas.

The second film looks at how the subject area of Health is using VR and XR technologies. Sims Lead Kate Ewing and Nursing Senior Lecturer Hannah Cannon discuss how VR software is helping to develop nursing students’ critical skills through the use of interactive hospital scenarios. The simulation tools are used in combination with debriefing discussions that provide rich learning experiences matching the tutor’s individual learning outcomes.

Image: Senior Lecturer in Nursing Hannah Cannon demonstrates the use of XR technologies.
Video: Interview with Kate Ewing and Hannah Cannon on Developing Health Simulations. Duration: 7.41

Our next video case study focuses on how Senior Lecturer in Games Design David Nicholls works with his students to create prototype VR games. He discusses how VR and XR are areas that are both very popular with students and have huge potential for their future employability. So much so, that he is currently collaborating on the development of VR projects across multiple subject areas within the University.

Image: VR development and experiences in Games Design
Video: Interview with David Nicholls on use of VR in Games Design. Duration: 2.17

And finally, the last film in this series looks at how the University’s Centre for Active Digital Education (CADE) has launched a number of Special Interest Groups (SIGs) for new technologies such as XR (including MR, VR and AR), Artificial Intelligence (AI), Distance Learning (DL) and Game-Based Learning (GBL).

Image: Head of Learning Technology Rob Howe discusses the strategic approach to VR at UON.
Video: Interview with Rob Howe on CADE and VR special interest groups. Duration 2.42

Head of Learning Technology Rob Howe explains that CADE is a platform for creative discussion on the teaching and learning opportunities afforded by these extended reality technologies, and how, by taking a strategic approach to the development of XR and working collaboratively, more students and staff will benefit from our expanding expertise in this area.

For more details on the Centre for Active Digital Education (CADE) and to join the discussions please see: https://www.northampton.ac.uk/research/research-institutes-and-centres/centre-for-active-digital-education-cade/
