MidnightReign.Org


If you have a Tello drone - actually, this probably applies to any device you need to join via its own wifi network in order to configure or control it from iOS / iPadOS - and the app just won't detect the drone even though you're connected to the device's wifi, try the following:

1) In the Settings app, scroll down to the particular app's settings and check that it has Local Network access.

2) Forget your home wifi network so that iOS/iPadOS stops trying to connect to it when it doesn't detect internet access on the drone wifi.

3) Disable Mobile Data. Even if you're connected to the drone's wifi and have an IP address, the app will not be able to reach the drone while Mobile Data is enabled.

Now it should work. Good luck.


Newer versions of XenServer expect you to use vApps to handle auto-starting virtual machines. This may not be appropriate in some situations.

You can still enable auto-power-on via the command line on the host [https://support.citrix.com/article/CTX133910]; however, this just batch-starts all VMs at once. You might like a bit more control, particularly if you require the VMs to start up in a particular order.
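For reference, the batch method from that article boils down to a couple of xe commands (the UUIDs here are placeholders; look yours up with xe pool-list and xe vm-list on your own host):

```
# Enable auto power-on at the pool level (placeholder UUID)
xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true

# Flag each VM that should auto-start (placeholder UUID)
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=true
```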

The method I'm about to explain may not be particularly elegant, but it works for my home lab.

Establish a shell connection to your XenServer / XCP-NG host via SSH or directly on the console. Using PuTTY or some other SSH client that allows you to copy and paste will be really helpful.

You should be in the root user's home directory, /root. This is the default directory you're dropped into when you first establish your connection; otherwise just type cd and press Enter to go straight there.

Create a new file called vm-autostart.sh with your favourite editor. I like vi as it's usually available.

Paste the following contents and modify the array called vms to suit:

#!/bin/bash

# xe vm-list for name-label, add in start order
vms=("VM1" "VM2" "VM3" "VM4" "VM5" "VM6" "VM7" "VM8" "VM9" "VM10")
wait=42s

# No need to modify below
initwait=2.5m
vmslength=${#vms[@]}
log=/root/vma.log

start_vm () {
   echo -n "[$(date +"%T")] Starting $1 ... " >> "${log}"
   # Quote the name-label so VM names containing spaces still work
   if /opt/xensource/bin/xe vm-start name-label="$1"
     then
       echo "Success" >> "${log}"
     else
       echo "FAILED" >> "${log}"
   fi

   # Wait if not the last vm
   if [ "$1" != "${vms[${vmslength}-1]}" ]
     then
       echo "Waiting ${wait}" >> "${log}"
       sleep "${wait}"
   fi
}

echo "[$(date +"%T")] Running autostart script (Waiting ${initwait})" > "${log}"
sleep "${initwait}"

# Quote the expansion so names with spaces aren't word-split
for vm in "${vms[@]}"
do
  start_vm "${vm}"
done

echo "[$(date +"%T")] Startup complete." >> "${log}"

The vms array takes the list of VM name-label properties. You can see them on the host by running xe vm-list, or just take a look at your management software for the VM names. If you prefer to use the UUIDs, just modify the script accordingly.
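If you just want the name-labels in one hit, ready to paste into the array, something like this should do it (is-control-domain=false filters out the dom0 entry; --minimal gives comma-separated output):

```
xe vm-list is-control-domain=false params=name-label --minimal
```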

The wait variable is set to 42 seconds, which is just slightly longer than it takes each of my VMs to start up. You may require a bit longer, or you can set it a bit shorter. As my VMs all boot from the host's local disks, I have the delay set so that there isn't so much contention for disk access. If you're booting from a storage array, you might not require such a long delay.

The initwait variable is set to 2.5 minutes. This allows time for the toolstack to finish starting before the script tries to start the first VM. If the toolstack hasn't properly started before the first VM attempts to boot, that VM will fail to start and you will have to start it manually. Subsequent machines will usually start, depending on the wait variable.
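If guessing at a fixed delay bothers you, an alternative (an untested sketch on my part) is to poll xe until the toolstack answers before entering the start loop:

```
# Replace "sleep ${initwait}" with a poll: keep retrying a harmless
# query until the toolstack responds, then carry on with the loop.
until /opt/xensource/bin/xe host-list >/dev/null 2>&1
do
    sleep 10
done
```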

Save the script and quit the editor when you're happy with it. Remember to set the script to be executable with chmod a+x vm-autostart.sh.

Edit /etc/rc.d/rc.local with your favourite editor.

At the bottom of the file, add a call to your newly created and executable script:

/root/vm-autostart.sh

Save the file and quit the editor.

Make the rc.local script executable:

chmod a+x /etc/rc.d/rc.local

Next time your host restarts, your VMs should start automatically. Remember to test the script manually by shutting down all your VMs and then running the script in the shell to see that you didn't inadvertently introduce any errors.
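You can also dry-run the ordering and wait logic anywhere by stubbing out the xe call; this is a self-contained sketch of mine (shortened waits, echo in place of xe vm-start), not the real script:

```shell
#!/bin/bash
# Dry-run of the autostart loop: "starts" each VM with a stub instead
# of xe, so the ordering and wait logic can be checked on any machine.
vms=("VM1" "VM2" "VM3")
wait=1                       # seconds, shortened for the dry run
log=/tmp/vma-dryrun.log

start_vm () {
  # Stub standing in for: xe vm-start name-label="$1"
  echo "[$(date +"%T")] Starting $1 ... Success" >> "${log}"
  # Wait if not the last vm, same test as the real script
  if [ "$1" != "${vms[${#vms[@]}-1]}" ]; then
    sleep "${wait}"
  fi
}

: > "${log}"
for vm in "${vms[@]}"; do
  start_vm "${vm}"
done
echo "Startup complete." >> "${log}"
cat "${log}"
```

If the log shows each VM in order followed by "Startup complete.", the real script's loop is behaving the same way.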

You can track the progress of the script. As soon as your host has rebooted, connect to the shell and either run tail -f /root/vma.log, or run less /root/vma.log and press Shift+F to get the follow function.

In the interest of having this information somewhere handy: firstly, to successfully install the VirtualBox Guest Additions within a server-based image, install the build prerequisites:

sudo apt-get install dkms build-essential linux-headers-`uname -r`
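Once those prerequisites are in, the usual routine (assuming you've inserted the Guest Additions CD image from the VirtualBox Devices menu) is roughly:

```
sudo mount /dev/cdrom /mnt
sudo /mnt/VBoxLinuxAdditions.run
sudo umount /mnt
```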

I discovered that if you clone an Ubuntu server-based image, networking stops functioning in the clone. The reason is that the new machine assigns a new MAC address to the NIC, so the udev rules think it's a new card and assign it a new device ID, like eth1 or eth2.

To prevent this from happening, in your base image, edit /lib/udev/rules.d/75-persistent-net-generator.rules

Search for 'xen', and you'll see a block about ignoring the Xen virtual interfaces. Add the following below it:

# ignore VirtualBox virtual interfaces
ATTR{address}=="08:00:27:*", GOTO="persistent_net_generator_end"

Save it, and then remove the file /etc/udev/rules.d/70-persistent-net.rules.

Do the same thing in any cloned images with broken networking, and reboot the VMs.

You've been messing about in your VM, installing and removing software, etc. The disk usage reported inside the machine says you've only used 3 GB, yet your actual image file is much bigger. How do you fix it?

It's fairly simple to fix. First, you should probably remove all the unnecessary crap that you never use. There are a bunch of utilities out there that can help you with this.

Defrag your drive using your favourite defragger.

Now you will need one of two possible utilities (There may be more, but these are the ones I'm aware of):

  • SDelete from http://www.sysinternals.com – a tiny 47 KB executable.
  • Precompact.iso – obtained from a Microsoft Virtual Server installation (may be available in other MS VM products).

Precompact.iso is MUCH easier to use. All you need to do is mount the ISO inside your VM, and it will prepare your disk for compaction.

SDelete is only marginally more difficult. Run it in a command prompt like so: SDelete -c C: (or use whatever drive letter you want to compact). SDelete will write zeroes to the free space on your drive image, which allows compaction to take place properly. Note – this is exactly what Precompact.iso does, just without the fancy Windows GUI progress bar. (In newer versions of SDelete the zero-free-space option is -z, while -c overwrites free space for secure deletion, so check sdelete /? for your version.)
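For what it's worth, the same trick works in a Linux guest with plain dd (my own sketch, nothing to do with SDelete): write zeroes over the free space, sync, then delete the file. A small-scale demonstration of the idea:

```shell
# Write a file full of zeroes, sync it to disk, then remove it.
# Capped at 4 MiB here for demonstration; in a real guest you would
# omit count= and let dd run until the disk fills up before deleting.
dd if=/dev/zero of=/tmp/zero.fill bs=1M count=4 status=none
sync
ls -l /tmp/zero.fill
rm /tmp/zero.fill
```

After that, shut the guest down and compact the image from the host as below.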

As soon as Precompact or SDelete is finished, shut down the VM.

Open command prompt on your host machine, and navigate to the folder where your hard disk images are located.

NOTE: It's probably MUCH easier to have the path to VirtualBox in your PATH variable; otherwise you have to specify the full path to VBoxManage every time you use it.

In the command prompt, run:

[path to virtualbox]\VBoxManage modifyhd "Name Of Image.vdi" --compact

Your disk image will now be compacted, and should end up quite a bit smaller than it was. (Newer VirtualBox releases call this command modifymedium, but modifyhd still works as an alias.) If you get an error about the disk image not being found blah blah, specify the FULL path to the image, like so:

[path to virtualbox]\VBoxManage modifyhd "C:\Users\Username\.VirtualBox\HardDisks\Name of Image.vdi" --compact