X-Wing with Snapmaker U1

Screenshot 2026-01-24 100857

After getting my new Snapmaker U1 and printing my first item I was looking for another project to print and found this:

https://galacticarmory.net/collections/3d-files/products/x-wing-vehicle-kit-card-3d-print-files

a cool Star Wars X-Wing fighter. You can see the result above.

The model was great, but when I pulled it into the Snapmaker Orca slicer it wasn’t set up with the right colours. This meant that my first attempt was all in white. Once I had worked out the colour mapping I ended up with the desired result, but I am still not happy with the process and need to spend more time with the Snapmaker Orca slicer to understand how it works and to make remapping colours easier.

However, I am super happy with the result and am now looking for my next project.

Using Azure AI on a web cam image

blog1

So far I’ve connected an Arducam 3MP camera to an ESP32 controller and uploaded regular photos to Azure and displayed them on a web page. Those details are here:

https://blog.ciaopslabs.com/2026/01/18/arducam-as-a-live-web-cam/

Now what I want to do is feed those images into Azure AI and display the results back on a web page.

The first step in this process will be to create a Computer Vision Service in Azure.

  1. In Azure Portal, click “+ Create a resource” (top left)
  2. Search for Computer Vision
    • Type: “Computer Vision”
    • Click on “Computer Vision” (with the eye icon)
    • Make sure it says “Computer Vision” not “Azure AI services”
    • Click “Create”
  3. Configure the Resource
    • Subscription: Your Azure subscription
    • Resource Group: Select “image-analysis-rg” (the one you created earlier)
    • Region: CRITICAL – Must be one of these:
      • East US
      • West US
      • France Central
      • North Europe
      • West Europe
      • Southeast Asia
      • East Asia
      • Korea Central
        Choose East US if unsure (best compatibility with Vision Studio)
    • Name: Give it a unique name
      • Example: “my-vision-service-2026”
      • Lowercase, numbers, hyphens only
    • Pricing tier: “Free F0”
      • Gives you 5,000 API calls per month
      • 20 calls per minute
      • Completely FREE!
  4. Create the Resource
    • Check the “Responsible AI Notice” checkbox
    • Click “Review + create”
    • Click “Create”
    • Wait 1-2 minutes
    • Click “Go to resource”

With this created, you now need to get your API credentials for the AI service.

  1. In your Computer Vision resource, look at the left menu
  2. Click “Keys and Endpoint” (under “Resource Management”)
  3. Copy Your Credentials
    • Click “Show Keys”
    • Click the copy icon next to KEY 1
    • Paste into Notepad and label it: API Key: [your-key]
  4. Copy Your Endpoint

Keep these values safe – you’ll need them in the next step!

Next, you’ll need to create an HTML webpage to connect and display the results. You’ll find that here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/vision-index.html

Make sure you update these values in your file to match your service:

const VISION_ENDPOINT = 'https://<YOUR VISION SERVICE NAME>.cognitiveservices.azure.com/';

const VISION_KEY = '<YOUR VISION SERVICE KEY>';

const IMAGE_URL = 'https://<YOUR BLOB STORAGE ACCOUNT>.blob.core.windows.net/images/latest.jpg';

Note: this is not a secure way to implement the service if you are going to expose the page to the Internet. This should therefore be addressed if you plan to make the page public.
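For reference, what the page does with those three values is POST the image URL to the Image Analysis REST endpoint. Here is a rough Python equivalent (the endpoint, key and image URL below are placeholders, and I’m assuming the v3.2 analyze route with the Description and Tags features, which matches what the page displays):

```python
import json
import urllib.request

# Placeholder values -- substitute your own service details.
VISION_ENDPOINT = "https://my-vision-service-2026.cognitiveservices.azure.com/"
VISION_KEY = "YOUR-VISION-SERVICE-KEY"
IMAGE_URL = "https://mystorageaccount.blob.core.windows.net/images/latest.jpg"

def build_analyze_request(endpoint, key, image_url):
    """Build the REST request used to analyse an image by URL."""
    url = (endpoint.rstrip("/")
           + "/vision/v3.2/analyze?visualFeatures=Description,Tags")
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # KEY 1 from "Keys and Endpoint"
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return url, headers, body

url, headers, body = build_analyze_request(VISION_ENDPOINT, VISION_KEY, IMAGE_URL)

# Uncomment to actually call the service (needs a valid key and endpoint):
# req = urllib.request.Request(url, data=body, headers=headers, method="POST")
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["description"]["captions"][0]["text"])
```

The JSON that comes back contains the caption, confidence score and tags that the page renders.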

Once you have the new HTML file, upload it to the $web container of your Blob storage account as follows:

  1. Go back to Azure Portal
  2. Navigate to your Storage Account
    • Search for your storage account name in the top search bar
    • Click on it
  3. Open Storage Browser
    • In the left menu, click “Storage browser”
    • Expand “Blob containers”
    • Click on “$web” (this is the special container for static websites)
  4. Upload your HTML file
    • Click “Upload” button at the top
    • Click “Browse for files”
    • Select your “index.html” file
    • Important: Check the box “Overwrite if files already exist”
    • Click “Upload”
    • Wait for upload to complete (shows green checkmark)

Now you should be able to view the website by:

  1. Open your browser
  2. Go to your website URL (from Step 2)
  3. What you should see:
    • Your image displayed
    • “Analyzing image…” message appears
    • After 2-3 seconds:
      • AI-generated caption (e.g., “a person standing on a beach”)
      • Confidence score (e.g., “Confidence: 87.5%”)
      • Tags with percentages (e.g., “outdoor 99%”, “beach 95%”)
    • Australian date/time format timestamp
    • Every 60 seconds, the image auto-refreshes
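On that last point, a detail worth knowing: because the image always has the same blob name, the page has to defeat browser caching to actually show a fresh picture each minute. A common trick (and my assumption about how the page does it) is appending a timestamp query parameter to the image URL; in Python the idea looks like this:

```python
import time

def cache_busted(url, now=None):
    """Append a timestamp query parameter so the browser refetches
    latest.jpg instead of serving a stale cached copy."""
    stamp = int(time.time() if now is None else now)
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}t={stamp}"

# Each call produces a different URL for the same blob, defeating the cache.
print(cache_busted(
    "https://myaccount.blob.core.windows.net/images/latest.jpg"))
```

Crucially, this only re-downloads the image; it never triggers another AI analysis call.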

Screenshot 2026-01-18 121901

Troubleshooting Common Issues

Issue 1: “Access Denied” or 401 Error

Cause: API key is incorrect or endpoint is wrong

Solution:

  1. Go back to Azure Portal
  2. Navigate to your Computer Vision resource
  3. Click “Keys and Endpoint”
  4. Click “Show Keys”
  5. Verify you copied KEY 1 correctly (no extra spaces)
  6. Check the endpoint URL matches exactly
  7. Re-edit your index.html file with correct values
  8. Re-upload to $web container

Issue 2: Image Doesn’t Show

Cause: Image URL is wrong or image container is not public

Solution:

  1. Go to Storage Account → Containers → “images”
  2. Click on your image file
  3. Copy the URL and verify it’s correct in your HTML
  4. Click on the “images” container name
  5. Click “Change access level”
  6. Select “Blob (anonymous read access for blobs only)”
  7. Click OK

Issue 3: CORS Error in Browser Console

Cause: Cross-Origin Resource Sharing not configured

Solution:

  1. Go to your Computer Vision resource
  2. Find “CORS” in the left menu (under API or Settings)
  3. Add a new allowed origin:
  4. Click Save
  5. Refresh your webpage

Issue 4: “Analysis never completes” or Stuck on “Analyzing…”

Cause: Usually a JavaScript error or network issue

Solution:

  1. Press F12 to open browser Developer Tools
  2. Click “Console” tab
  3. Look for red error messages
  4. Common fixes:
    • Check all three values (ENDPOINT, KEY, IMAGE_URL) are correct
    • Ensure no typos in the JavaScript section
    • Try a different browser
  5. Verify your Computer Vision resource is in a supported region (see Step 4)

Issue 5: Wrong Region Error

Cause: Computer Vision resource created in unsupported region

Solution:

  1. Delete the Computer Vision resource
  2. Create a new one in a supported region:
    • East US, West US, France Central, North Europe, West Europe, Southeast Asia, East Asia, or Korea Central
  3. Get the new KEY and ENDPOINT
  4. Update your HTML file
  5. Re-upload

Understanding Costs (Important!)

Free Tier Limits

  • Computer Vision F0: 5,000 API calls per month – FREE
  • Storage Account: First 5GB storage – FREE
  • Bandwidth: First 15GB outbound – FREE

How This Setup Saves Money

Image Refresh: Happens every 60 seconds (FREE – just downloading an image)

AI Analysis: Only happens:

  • When you first load the page (1 call)
  • When you manually refresh your browser (1 call per refresh)

NOT when: The image auto-refreshes every 60 seconds

Example Usage:

  • You check the page 5 times per day = 5 API calls/day
  • Over 30 days = 150 API calls/month
  • Well within 5,000 free limit!

This optimized version:

  • Stays FREE even if left open 24/7 ✅
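The arithmetic above can be sanity-checked in a couple of lines of Python:

```python
FREE_CALLS_PER_MONTH = 5_000  # Computer Vision F0 tier allowance

def monthly_calls(checks_per_day, days=30):
    """Each page load or manual browser refresh costs one analyze call;
    the 60-second image auto-refresh costs none."""
    return checks_per_day * days

calls = monthly_calls(5)                # 5 checks per day
print(calls)                            # 150
print(calls <= FREE_CALLS_PER_MONTH)    # True -- well inside the free tier
```

Even checking the page every 10 minutes around the clock (144 calls/day, 4,320/month) would still just squeak under the 5,000 limit.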

The next step will be to attempt to have the AI count and display the number of vehicles it sees in each shot.

Arducam as a live web cam

blog1

I recently managed to send images from Arducam to Azure, which was a major win. The only challenge with that is that it isn’t easy to see those images inside Azure. Thus, I wanted an easy way to do this and figured that displaying them on a web page was the way to go.

Turns out, this isn’t too hard using Azure. Thus, I started by modifying the code for the controller so it would always upload an image using the same name and do so every 60 seconds. You can find the code for the controller here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/storage-web.cpp

and documentation here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/storage-web.md
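The real upload happens in C++ on the ESP32 in storage-web.cpp, but as a rough illustration of the underlying Blob REST call, and assuming SAS-token authentication (the account, container and token below are placeholders), the request looks like this:

```python
def build_blob_put(account, container, blob_name, sas_token):
    """Build the Put Blob request issued on every upload. Writing to a
    fixed name (latest.jpg) means the web page always finds the newest
    image at the same URL. All arguments here are placeholders."""
    url = (f"https://{account}.blob.core.windows.net/"
           f"{container}/{blob_name}?{sas_token}")
    headers = {
        "x-ms-blob-type": "BlockBlob",  # required by the Put Blob REST API
        "Content-Type": "image/jpeg",
    }
    return url, headers

url, headers = build_blob_put("arducamimages", "images",
                              "latest.jpg", "SAS-TOKEN")
# To actually upload, PUT the JPEG bytes to `url` with those headers, e.g.
# urllib.request.Request(url, data=jpeg_bytes, headers=headers, method="PUT")
```

Overwriting the same blob every 60 seconds also keeps storage use flat rather than accumulating thousands of images.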

Next, I created an index.html file to display the image from Azure Blob storage. You can find a copy of that here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/web-index.html

You need to rename it to index.html and put your Blob storage URL on line 40.

Next, you’ll need to enable a static website on your Blob storage account.

image

  1. Go to Azure Portal (portal.azure.com)
  2. Navigate to your storage account: arducamimages
  3. In the left menu, find “Static website” (under “Data management” or “Settings”)
  4. Click “Enabled”
  5. Set Index document name: index.html
  6. Note the Primary endpoint URL (e.g., https://arducamimages.z88.web.core.windows.net/)
  7. Click “Save”

image

Then you’ll need to upload your index.html to the $web container in your Blob storage account.

  1. In your storage account, click “Containers” (under “Data storage”)
  2. Click on the $web container (created automatically when you enabled static website)
  3. Click “Upload”
  4. Select your index.html file
  5. Check “Overwrite if files already exist”
  6. Click “Upload”

image

Now you’ll need to set public access for $web container.

  1. While in the $web container, click “Change access level”
  2. Set “Public access level” to “Blob (anonymous read access for blobs only)”
  3. Click “OK”

image

Next, set public access for images container

  1. Go back to “Containers”
  2. Click on the “images” container (where your latest.jpg is stored)
  3. Click “Change access level”
  4. Set “Public access level” to “Blob (anonymous read access for blobs only)”
  5. Click “OK”

image

You should be able to view the website using the Primary endpoint that was shown back in the Static website settings above. In my case I also needed to add /index.html to the end of the URL to get it to display.

image

You can see my result above. The page should refresh every 60 seconds automatically with a new image. You should also be able to now view this image from anywhere by simply browsing to your URL.

My next plan is to try and integrate Azure AI Vision, given the image is already in Azure, and have the page report the weather, i.e. sunny, wet, etc. Yes, I know you can see that in the image, but it makes for an easy way to verify that the AI is reading the image correctly. Let’s see how hard that is to do next.

I have my Snapmaker U1

blog

My new Snapmaker U1 recently arrived. It allows multi-colour printing of up to 4 different colours at once. You can see the test figure I was able to make.

The printer is high quality and easy to assemble.

image

Device Calibration (1/3)

Homing Anomaly

Error Code: 0002-0528-0000-0011

Check if timing belts are tensioned properly, X and Y axes move smoothly, and there are no obstructions at the X and Y homing positions. Retry after troubleshooting. If issue persists, contact technical support.

I did get the above error when initially setting it up but the following help article:

https://wiki.snapmaker.com/en/snapmaker_u1/troubleshooting/U1_homing_failure

helped. I basically loosened the print head, moved it through its range of travel a few times, tightened the screws back up a little less tightly, and it all worked.

There are plenty of reviews out there on YouTube if you want to go and take a look, but so far everything I have printed has worked flawlessly and I am very happy. Given that the printer has an open top (I have also bought a cover for it that is yet to arrive), it is louder than I expected. The cover should solve the problem when it arrives, but it is nothing I can’t deal with in the meantime.

So far so good then with the U1, and I’ll report back my progress as I grow more familiar with it.

My stuff 2026

blog

Over on the CIAOPS Blog I do a number of annual posts on a range of items I use in my business. I thought therefore I should start doing one here. So here goes:

Snapmaker Artisan – My 3D printer of choice. I have had this for a few years and use it to create everything I need. I love the quality of the device as well as the results that I get. It can also do laser cutting and CNC if you change the print head, but for me most of my time is spent printing.

Snapmaker U1 – After supporting this on Kickstarter earlier this year, I have only just received my unit this week. I’ll be posting more about this printer once I have it all working. In essence, where it differs from the Snapmaker Artisan is that the U1 allows you to print four (4) different colours without having to change filament. I’m looking forward to what I can get from this.

Visual Studio Code – my software development environment. Free from Microsoft. I use this to manage and develop all the code for my IoT projects.

PlatformIO – I use this extension in my coding environment to actually manage my IoT projects. It allows me to select the right controller board, manage the driver libraries, and upload the code to the actual controller boards. A must. Many others use the Arduino IDE, and even though PlatformIO does take a little getting used to, for me it is the way to go and allows me to develop and test things easily.

GitHub Copilot – starts with a free version, but I’m using the Pro version for $10 per month and would recommend that as it just makes life so easy. Code is the real secret to getting IoT projects working, and as I am not a developer and my C programming is pretty rusty, the number of times AI has allowed me to create what I want is amazing. It also deals with compile and syntax errors, missing drivers, commenting code and so much more. I’d still be battling away with the basics if I wasn’t using this, and since you can get started for free, so should you!

GitHub – where I publish all my code and documentation for my projects. Hopefully what I create can help others as they have helped me. It is also a great place to file projects for the time you need to go back and find out how you did something. Again, there is no cost to get started using GitHub.

Core Electronics – My primary source for components. Great range, easy purchasing and quick delivery. Highly recommended.

Little Bird Electronics – My backup source for components. Again, great range, easy purchasing and quick delivery. Highly recommended.

Acebott controllers – My current choice when it comes to controller boards and projects. Their stuff is the way I should have started my IoT journey.

Keyestudio – Another great controller and kit seller I use regularly.

Amazon – Always a great source for anything I need, whether controller or sensor boards, tools, etc. Easy ordering and quick delivery. This is where I got my Robot Arm from.

That’s probably enough to give you an idea of the main things I use in the lab. Take a look at these if you have any interest, and let me know if you have any questions on anything here.

Send images from Arducam to Azure

blog

Now that I have my Arducam working, the next step is to be able to upload the images from the camera to Azure Blob storage. To do this, you’ll need to set up an Azure subscription and follow these steps to actually create an Azure Storage Account:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/azure-storage.md

It is also recommended that you place all your sensitive information (WiFi password, Azure information, etc.) in an io_config.h file to separate it from the main code.

With the Azure Blob storage configured, next you’ll need to hook up your Arducam to your controller. This time I’ve gone for an Acebott ESP32-Max-V1.0 because it has inbuilt WiFi. Thus, I have wired the following ports:

VSPI (recommended):

  • MOSI (GPIO 23): Top right area, blue header row
  • MISO (GPIO 19): Top right area, blue header row
  • SCK (GPIO 18): Top right area, blue header row
  • CS (GPIO 5): Top left area, blue header row (you can use any available GPIO for CS)

Acebott ESP32-Max-V1.0 pinout

image

Arducam Pinout

image

I then uploaded the following code to the Acebott board:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image-azure.cpp

and the documentation for this is here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image-azure.md

but in essence after the board has booted the serial interface will show:

image

If you select one of the upload options you should see something like:

image

then if you look inside the Azure Blob storage container you should see the file like so:

image

This should make it easy to store many images from the camera without having to use the serial port to view and download them.

Arducam success! Finally

blog

If you have been following along here for a while you’ll know that I have had constant failures trying to get an Arducam Mega 3MP working with my IoT projects. The last attempt was:

https://blog.ciaopslabs.com/2025/07/13/arducam-mega-3mp-failed-attempt/

After getting my robot car working with a PS3 controller, I was working towards getting the PS3 controller also working with my robot arm. At the moment the robot arm is connected to a Keyestudio KS0172 with a Keyestudio Sensor Shield/Expansion Board V5 for Arduino Leonardo attached. Unfortunately, the Keyestudio KS0172 lacks both Bluetooth and WiFi, but I noticed the Keyestudio Sensor Shield/Expansion Board V5 for Arduino Leonardo actually has a dedicated SPI port like so:

image

Ah ha. I wonder if I can get that working with the Arducam? Spoiler alert, yes I can.

I have now come to realise probably the two biggest mistakes I have made with the Arducam Mega 3MP:

1. I thought it was a ‘streaming’ style camera. No, it’s really designed just to take pictures

2. I need something to ‘read/download’ the images from the camera to actually see them

With the camera connected to the Keyestudio Sensor Shield/Expansion Board V5 for Arduino Leonardo SPI port, I was ready to try again. As a reminder, the camera connections are:

image

I used this piece of code on the Keyestudio KS0172:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image.cpp

to connect to the camera and allow a photo to be taken and stream it down the serial port on request. The documentation for this code is here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image.md

I then had to write some Python code to actually initiate a photo being taken and extract the image from the camera over the USB/serial port and put it into a subdirectory on my machine. That code is here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image.py

and the documentation for that is here:

https://github.com/directorcia/Azure/blob/master/Iot/Arducam/3MP/capture-image-py.md

and to execute this Python script I also needed to install Python on my machine, which is pretty easy in Visual Studio code by just adding the Python extension.

With all that in place, and after a bit of back and forth to get the image to download correctly via the serial port, I was indeed able to confirm that my Arducam Mega 3MP is working properly and I can now use it to take photos.

Phew. That took a long time and a lot of effort. I think my major oversights, listed above, really held me back, along with the usual physical connection challenges. Now I have a much better understanding of what the camera can and can’t do and what I need to actually see an image, and most importantly the Arducam Mega 3MP is finally working!

ACEBOTT Smart car – Bringing it all together

blog

It is now time to bring all the pieces together on the Acebott Smart Car and make it a movable platform that can stream live video.

Screenshot 2026-01-04 101028

Screenshot 2026-01-04 101245

I’ve taken the standard ACEBOTT ESP32 Smart Car Starter Kit with Mecanum Wheels and added the ACEBOTT Bluetooth Controller Expansion for QD001 (QD010) to control its movement. I have also added the ACEBOTT ESP32 Camera Expansion pack for Smart Car (QD002) to give the car vision.

You can see that I have kept the ultrasonic sensor from QD001 and simply mounted the camera (QD002) on top to facilitate panning left and right. I could have added an additional servo to control this independently of the ultrasonic sensor; however, in the end I decided that it was easier simply to print a 3D mount so the camera unit could sit above the ultrasonic sensor and take advantage of the pan servo already in place. I could refine the design with a separate 3D printed mount for the camera unit if desired, but for the sake of getting things working I’ve decided to stay with this method.

I have detailed how to get the PS3 controller (QD010) working with the robot car (QD001) here –

https://blog.ciaopslabs.com/2025/12/28/connecting-a-joystick-controller-to-an-acebott-esp32-smart-car/

and I have covered off getting the camera (QD002) working stand alone here:

https://blog.ciaopslabs.com/2025/12/31/connecting-a-webcam-to-an-acebott-esp32-smart-car/

You’ll find the code and documentation in those articles. At a minimum you’ll need to program the camera (QD002) to support the creation of a web server so it can stream the video to a device.

To mount a device with a screen (an old iPhone) to the PS3 controller (QD010) I found this:

Universal smartphone mount for DUALSHOCK 3 (PS3 controller)

that I could 3D print. I did need to slightly extend the width of the base to suit my controller, but it worked a treat.

Screenshot 2026-01-04 103123

The above version of the holder was my first printing attempt, where I broke the lower part of the base holder when attempting to fit it on the controller. This led me to slightly lengthen the model the second time around, which fixed the issue. The initial broken model is secured here using some rubber bands, but the redone version fits perfectly.

With the code loaded into the robot car (QD001) and the camera (QD002) as well as having the PS3 controller (QD010) connected the end result looks like:

Connecting a webcam to an ACEBOTT ESP32 Smart Car

blog

With a PS3 style controller connected to an Acebott ESP32 Smart Car, my next task was getting the QD002 ACEBOTT ESP32 Camera Expansion pack for Smart Car add-on working.

I had previously tried to get an Arducam Mega 3MP working and failed miserably, but was highly motivated to overcome that setback with a purpose built camera add on in the Acebott QD002.

Things did not get off to a great start because the connection process required the camera to be connected to the UART port of the driver board.

Screenshot 2025-12-31 222556

The problem with that is the UART port conflicts with the serial port used for uploads and monitoring. This meant that I needed to disconnect the camera UART connection every time I wanted to update my code, and then with it reconnected there was no real way to monitor the result. I either needed to go to great lengths to program and connect a different UART on the board or find another solution.

The easiest solution was to simply upload code to the ESP32 camera board to enable a web server that streams video directly from the camera. You’ll find that code here:

https://github.com/directorcia/Azure/blob/master/Iot/Acebott/Smartcar/QD002/ACEBOTT%20QD002%20Camera%20Car%20V3.8/webcam.cpp

and the documentation for it here:

https://github.com/directorcia/Azure/blob/master/Iot/Acebott/Smartcar/QD002/ACEBOTT%20QD002%20Camera%20Car%20V3.8/webcam.md

Thus, the camera board will boot, connect to WiFi, run a web server, report its IP address to the serial console, and then stream the camera video there.
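If you ever want to consume that stream outside a browser, and assuming the board serves a standard multipart MJPEG stream (typical for ESP32 camera web servers, but an assumption about this particular firmware, as is the boundary string below), the individual frames can be split out of the raw bytes like this:

```python
JPEG_SOI = b"\xff\xd8"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def split_mjpeg_frames(buffer, boundary=b"--frame"):
    """Split raw multipart MJPEG bytes into individual JPEG frames.
    The boundary string is an assumption -- check the Content-Type
    header of the actual stream for the real one."""
    frames = []
    for part in buffer.split(boundary):
        start = part.find(JPEG_SOI)
        end = part.rfind(JPEG_EOI)
        if start != -1 and end > start:
            frames.append(part[start:end + 2])
    return frames
```

Each returned element is a complete JPEG that could be written to disk or fed to further processing.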

I cannot tell you how satisfying it was to finally see streamed images on my screen. It had taken a long time to get here, but now, finally, I was ready to finish assembly of the car and mount the camera onto it!