I wanted to compare a couple of AI tools. The question of which is best is often asked, and I'm fairly sure the answer changes every day or week as each vendor tries to outdo the others.
Those I tried are:
https://claude.ai/
https://chatgpt.com/
https://copilot.microsoft.com/
https://gemini.google.com/
Executive Summary
In all cases, the results were disappointing; however, all of the tools could reduce a lot of the initial drudgery, leaving only a clean-up and checking process.
- Claude - provided the most complete list but had the least use on the free tier. [Stingy]
- ChatGPT - gave more opportunity on the free tier to attempt to enhance the results. [Lazy]
- Copilot - strict on copyright, so limited itself. [Lazy and a jobsworth]
- Gemini - the only one to have good links to appropriate images. [Needs pushing]
For the short duration of this trial, I preferred the Claude user interface, but I did not spend much time testing this on any of the tools.
The Experiment
I started this as an experiment for work; however, I wanted to use a completely non-work-related question to avoid any risk of data leakage into public AI tools. For that reason I picked a current hobby topic, which is why the detailed results are on my personal blog.
The question I asked was:
"create a list of WH40K units from the imperial guard with their bases sizes and a link to an image of the figures"
The question deliberately used an abbreviation, the colloquial name for the faction and a grammatical error, "bases" instead of "base".
Once those results were displayed, I used a follow-up question to add more information to the list.
"please add the number of models in a unit and their points value to the list"
Claude (Anthropic)
Initial Results
- The AI fully understood the context of the question.
- The data is useful and formatted in an easy-to-read way.
- Included extra notes at the end.
- Included the process it was using in a separate window with links to the source.
- None of the links to images worked. They all returned "page not found"!
Follow-Up Results
What was good:
- The data is useful.
User Interface:
Source and explanation on the left and results on the right. This provided a lot of information and was easy to navigate.
Export
Once I had the results, how easy was it to export them?
- Download as Markdown - a text file that could potentially be used to export the data.
- Download as PDF - a PDF generated via print-to-PDF. The web links did not work in the output!
- Publish - this made the result public, with a link to the results. I did not do this.
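If you did want to reuse the Markdown download, the table in it could be converted to CSV without retyping. This is a minimal sketch, assuming the export contains a simple pipe-delimited Markdown table; the unit rows below are placeholders, not the actual export:

```python
import csv
import io

def markdown_table_to_csv(markdown: str) -> str:
    """Convert a simple pipe-delimited Markdown table to CSV text."""
    rows = []
    for line in markdown.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # ignore any prose around the table
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the separator row (e.g. | --- | --- |)
        if all(set(c) <= set("-: ") for c in cells):
            continue
        rows.append(cells)
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()

# Placeholder table in the shape of the exported results
sample = """
| Unit | Base size |
| --- | --- |
| Cadian Shock Troops | 25mm |
| Leman Russ | Large oval |
"""
print(markdown_table_to_csv(sample))
```

This sidesteps the broken PDF links entirely, since the Markdown file keeps the raw text of each cell.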
Claude ran out of free processing, so I would have to wait until 5pm to be able to do anything else.
ChatGPT (OpenAI)
Initial Results
These came back much quicker than Claude's, but the list was incomplete and much shorter than Claude's list.
What was good:
- The AI fully understood the context of the question.
- The data is useful and formatted in an easy-to-read way.
- Included extra notes at the end.
What was poor:
- The links were not to images of the models.
- The list was incomplete.
Follow-Up Results
What was good:
- The data is useful.
Export
- Export required an additional chat prompt. It turned some of the table into a spreadsheet but, for no obvious reason, did not include all of the results; the spreadsheet list was even shorter than the incomplete web page list! The output was an xlsx spreadsheet rather than just a CSV, so that was a bonus.
- There was a share option, but this made the chat public via a shareable link.
Extra
As I could keep going with ChatGPT, I tried another improvement:
"please change the links in the list to images of the models used in the units"
It resulted in a separate list of what appeared to be links to image files, but only one of the links was to a useful image.
"try another site for those images"
It came back with the same site, but links that did not work.
"please try a completely different location to get those images"
It did not add the images to the list but asked me questions.
"Please include all Astra Militarium units and open the image within the spreadsheet. Please create the spreadsheet for me to download"
Then more questions.
"Option A, no Forge World units"
The result was a much longer list than I started with, the data was useful but the links to images were still incorrect.
"Recreate the spreadsheet but use the URL https://wh40k.lexicanum.com/wiki/Portal:Miniatures as the base of where to get images from"
After all that, it still failed to get usable images. It selected the wrong links from that site!
Copilot (Microsoft)
Initial Results
Copilot built the list on screen as it went, so initial results showed up very quickly, but there was a wait for the list to finish.
The list was short, but there was a prompt to expand the list to include everything.
What was good:
- The AI fully understood the context of the question.
- A few of the links were to meaningful images.
- There were some additional notes.
What was poor:
- The data was formatted into summary groups so did not provide the level of detail I expected.
- The list was incomplete.
Follow-Up Results
Copilot was a little strict on copyright: even though the information has been released for free for personal use, Copilot explained that it could not republish it.
What was poor:
- It did not include all the data I expected due to copyright.
User Interface:
A single chat window. Some notes were included inline.
Export
- There was a share button.
- I had to use a prompt to get a spreadsheet, which resulted in a rather short CSV file.
Extra
I tried a few more prompts, but the resulting spreadsheet was always too short, and Copilot refused to add the points values without my collating them manually.
Gemini (Google)
This is included in my Google subscription, so I automatically get the Pro version, not the free trial.
Initial Results
What was good:
- The AI fully understood the context of the question.
- The data is useful and formatted in an easy-to-read way.
- Included extra notes at the end.
- Many of the links to images were to appropriate pages.
What was poor:
- The list was incomplete, although the notes justified this by saying it covered only common units.
Follow-Up Results
What was good:
- The data was useful.
User Interface:
A single chat window. Some notes were included inline.
Export
- No obvious share button.
- Used a prompt to get a spreadsheet.
"please put that data into a spreadsheet that I can download"
It did not create the spreadsheet, but gave a text list that could easily be copied into Notepad and saved as a CSV file! I don't know why it used that longhand method.
Extra
"please add all the astra militarum units from the Munitorum Field Manual in to that list ready to download"
That produced the best results so far, with a more complete list and several of the image links were usable.
Conclusion
The results from the various products were not identical, and some of them were incomplete. I am sure any of them would save some time, but where accuracy matters, the source data would need to be checked manually against the AI results. Not ideal!
This is only one test, but it is similar to the sort of thing I am likely to do when researching any subject. My conclusion is that these tools could save some time, but the results are disappointing.

































