Author: Dave

  • My 2010 Yamaha XVS950a Midnight Star and its damned indicators


Something a little random this time. For as long as I can remember, I have always wanted a cruiser to play with, to make my own so to speak. I've ridden for over 25 years now, but alas, I nearly died in 2001 (thanks for not seeing me and pulling out on me, Ford Fiesta) and then broke my wrist two years later as I caught an early frost white line on my way home from work (should have let go and not tried to save it … but 10kkkkkkkkkkk snap … couldn't help myself LOL). Anyway, the third time is a charm, as they say, and our kids were only just with us, well the first one at least, so I decided that then was not the time to take the chance (read: WIFE SAY NO!).

Anyway, fast forward 20 years and here I am with a not-new project to work on, and it is definitely an experience. I managed to get this middle-of-the-road cruiser at a really good price, which left me the budget to do with it what I wanted and make it my own. First things first though, I stripped it of the pillion seat, sissy bar and panniers, which is when I found out the rear indicators were broken. No biggy; ten minutes on eBay later, new indicators were winging their way over from China. It is probably pertinent to note at this point that I do not know what I am doing; I just want to do it!

The indicators arrive, I fit the rears and, boom, we have hyper-flashing. That is where this story begins. Googling where the indicator relay was produced many, many results, including various incorrect AI conclusions; at one point Google's AI was giving me the details of the MT-09, and how that even remotely sounds like XVS950a I do not know. So that is the reason for this post: I had to work it out myself. The 1100 V Star has the relay under the fuel tank, same with the 1300. The 650 has it behind the side cover. Nothing would tell me about the bloody bike that I had. It is actually under the seat latch on the near side (UK/Japan near side, before any of you try to correct me), just in front of the tool kit.

I realised after putting everything back together that I took this picture without the toolkit there … so to the left (in the picture) of the indicator relay location is where the tool kit sits.

So you have to remove the seat catch to get at it; well, you undo the bolts and move it aside, as it's still attached to the key release on the side. Then you can get access to the indicator relay.

There is next to no play in the cable connecting to the relay, and it is nigh on impossible to get to without dropping it down. However, the cable is attached to the frame by a small clip which, using a trim removal tool, will pry out without incident. Once that is popped out it is easy to move the relay and cable down into the swing arm, remove the relay and attach a new one. The new Chinese relay was an L-shaped relay, so I had to cut the cable, crimp on female spade connectors and then connect those to the relay. Brown is for battery, labelled B on the relay, and brown/white is the load, labelled L on the relay.

In hindsight, replacing only the rear indicators and then testing the first relay was a mistake; the relay resolved the hyper-flashing temporarily, but then it gave up the ghost and I had to visit Amazon again, this time ordering a different one. The new one, however, didn't like operating a mix of indicators; when I first installed it the indicators just stayed on. As soon as I replaced the front indicators as well, everything worked as expected. For some reason it feels like things always go pear-shaped instead of being really simple.

Excuse the rusty stuff, everything is on my todo list; some I can do without procrastinating, some I can't. I am currently in the process of procrastinating about which bits to take apart, and when, so as to take them to a paint shop, but I do always want to try painting things; I have never done it, so I feel like I really should give it a go. I am also popping to the garage to sort out rear wheel alignment. The garage I frequent (TS Auto's) is awesome; I have known the owner, Aaron, man and boy, and I call him the "Car Whisperer". Well, Aaron is going to let me have a go at it, laugh at me (very loudly, I suspect) when I undoubtedly do something wrong and then help me do it properly! Once I have mastered that, I can see about putting a wider tyre on it without changing the swing arm; apparently that is doable. I also wanted to replace the floorboards with forward controls (and remove the engine crash bar), but I am told the ones available for the 650 and the 1100 do not fit the 950a Midnight, and there are none made for the 950a … see what I mean, simple things going pear-shaped again.

Anyway, I hope this helps someone! I wish I had found something like this when I tried to find out what was what and where it actually was. I forgot to take a picture looking under the tank, though I did do that too.

If anyone wants more on the progress of my bike then let me know; I will document every little thing I do, just to entertain you all 🙂

  • Wireguard continues .. Adding a Peer


It quickly became apparent that I would like to add more than one peer to the configuration, and stopping and restarting services seemed overkill as a way to solve the problem.

Obviously WireGuard is still pretty much a manual solution; let's face it, it is supposed to be. If it were going to be a mammoth undertaking, then we might as well stick with the more standard solutions out there.

Anyway, to add a peer and have it available immediately, this is what you have to do.

Edit your config, usually /etc/wireguard/wg0.conf, and add your new peer to the configuration.

    SSH Config
[Peer]
PublicKey = 09m4KVG6uJ8tz7bW8vVsLiWTcnTePx8cafnucxLQdlM=
AllowedIPs = 10.0.0.X/32

    Now we just need to force the reload of the config.

    ShellScript
    > wg syncconf wg0 <(wg-quick strip wg0)

    And now your server has a new peer that can connect to it.
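If you end up doing this a lot, the stanza itself can be templated with printf rather than hand-typed each time. A small sketch (the key and the 10.0.0.2/32 address are placeholders; substitute your client's real values):

```shell
# Placeholders - swap in the client's real public key and tunnel address
PUBKEY="09m4KVG6uJ8tz7bW8vVsLiWTcnTePx8cafnucxLQdlM="
IP="10.0.0.2/32"

# Build the [Peer] stanza exactly as it appears in wg0.conf
stanza=$(printf '[Peer]\nPublicKey = %s\nAllowedIPs = %s\n' "$PUBKEY" "$IP")
echo "$stanza"

# Append it with: echo "$stanza" | sudo tee -a /etc/wireguard/wg0.conf
# ... then run the syncconf line above to load it without a restart.
```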

Now, to create a Windows 11 WireGuard peer you have two choices: you can add one manually (which I did, keeping with the theme and all that), or you can import a config file. See the screenshot (redacted, obviously, but you get the idea).

An imported config file is pretty much the same thing, a simple text file with the same information in it.

    SSH Config
[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXS3I=
Address = 10.0.0.X/32
DNS = 8.8.8.8 # extras can be added but must be comma separated

[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXJDI=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = A.B.C.D:51820

    Then just click Add Tunnel and point it at the text file to import.

    Anyway, signing off now, hope that helps.

  • Password Generator


This is the original generator, as found in the Creating a LXD Backup Server post; I have decided to give it a nice home of its own. It is still in the original post, along with the source, easily copied for your use, abuse and pleasure.


Password Generator by Dave Wise

© Dave Wise 2022 . use and abuse . feel free to credit

For anyone creating anything to do with passwords and PHP, invariably they will need to generate fallback hashes to get them out of trouble. Well, instead of having to write one, this generates a hash using <?php password_hash('<password>', PASSWORD_DEFAULT); ?> and lets you copy it to your heart's content.
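If you just need a one-off hash and happen to have the PHP CLI installed, the exact same call works from a terminal too. A quick sketch ('letmein' being a placeholder password, obviously):

```shell
# PASSWORD_DEFAULT is currently bcrypt, so the hash should start with $2y$
hash=$(php -r "echo password_hash('letmein', PASSWORD_DEFAULT);")
echo "$hash"
```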

A password_hash Generator
by Dave Wise

This uses PASSWORD_DEFAULT as the hash generator

© Dave Wise 2022 . use and abuse . feel free to credit

If anyone is interested in the hash thing above, it's proper simple; the only complicated bit was getting WordPress to do its thing with AJAX calls.

To start, I created a simple plugin that registers an AJAX action which returns the hash of the submitted data. Then I enqueued an empty JavaScript handle, added a JavaScript variable for the AJAX URL, and that's pretty much it. Well, I had to activate the plugin, obviously. I didn't go to town; I just wanted it to work.

    All delivered using Custom HTML blocks and given to you using Code Pro blocks.

    If it helps, all power to you … if it doesn’t … ah well, I tried #peaceout.

    PHP
    <?php
    /*
    Plugin Name: Hash Generator
    Description: A hash generator that uses password_hash() but allows access to the front end for people needing static hashes
    Author: Dave Wise
    Version: 0.1 
    */
    
    function get_hash() {
  $pass = $_POST['password']; // must match the "password" key the front end appends to the FormData
      echo password_hash($pass, PASSWORD_DEFAULT);
      wp_die();
    }
    
    add_action ( 'wp_ajax_nopriv_get_hash', 'get_hash' );
    add_action ( 'wp_ajax_get_hash', 'get_hash' );
    
    wp_register_script( 'hashGenScript', '');
    wp_enqueue_script( 'hashGenScript' );
    wp_add_inline_script( 'hashGenScript',
      'const hashGenVar = ' . json_encode( array ( 'ajaxURL' => admin_url('admin-ajax.php') )) . ';'
    );
    
    ?>
    HTML
    <script>
        function getHash() {
          const password = document.getElementById("passwordInput").value;
          if (password == "") alert('Text input cannot be empty you plank');
          else {
            var formData = new FormData();
            formData.append("action", "get_hash");
            formData.append("password", password);
    
            jQuery.ajax({
              type: "POST",
              url:  hashGenVar.ajaxURL,
              data: formData,
              cache: false,
              processData: false, 
              contentType: false, 
              success: function(newHash) {
                var hashBox = document.getElementById("hashBox");
                hashBox.innerText = newHash;
              }
            });
          }
        }
    
        function copyHash() {
          const textarea = document.createElement('textarea');
          const password = document.getElementById("hashBox").innerText;
          if (!password) { return; }
          textarea.value = password;
          document.body.appendChild(textarea);
          textarea.select();
          document.execCommand('copy');
          textarea.remove();
          alert('Password copied to clipboard');
        }
    </script>
    <div style='background: #C1E1C1; color: white; max-width: 60%; margin: 0 auto; font-family: -apple-system, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif !important; margin: 10px 0 20px 0;'>
      <div style="padding: 30px !important">
      <h3 style="color: #417141" >A password_hash Generator <br /> by Dave Wise</h3>
      <p>This uses PASSWORD_DEFAULT as the hash generator</p>
      <input style="border: 1px solid #417141; border-radius: 5px; padding: 15px; outline: none; background: transparent;" type="text" name="passwordInput" id="passwordInput" placeholder="Enter the text to Hash"/>
      <div style="display: flex; align-items: center; justify-items: left; width: 100%; margin-bottom: 15px;">
        <p id="hashBox" style="overflow: hidden; padding: 10px 15px; border: 2px solid #f9f9f9; border-radius: 15px; font-size: 1.6rem; min-width: 50%; width: 90%;"> </p>
        <span onclick="copyHash()" style="font-size: 1.6rem; font-weight: bold; cursor: pointer;" class="dashicons dashicons-clipboard"></span>
      </div>
    
      <div style="display: flex; align-items: center; justify-items: left;" >
<button style="width:60%; padding: 10px 15px; margin-left: 10px; border-radius: 10px; background: #417141; color: #ffffff; border: 0; outline: none; cursor: pointer" onclick="getHash()">Get Password Hash</button>
      </div>
      <p>© Dave Wise 2024 . use and abuse . feel free to credit</p>
    </div>
    </div>
  • WireGuard – The quick and the dirty VPN


I have always been an ardent user of OpenVPN and strongSwan for the more standardised VPN solutions, but these come with their own headaches: client housekeeping, maintenance of a certificate authority, and management of any external resources that may provide service to one of the elements (a Directory Server or other LDAP server) along with the numerous related protocols (RADIUS, TACACS, etc.).

I was discussing with a friend how nice it would be to have a simple knock-it-up-and-go VPN service, something that could be deployed very quickly for small teams and only had to be set up once and then forgotten about. Then the reply came: "Oh, you mean like WireGuard?"

How had I not heard of this? There I was, stuck in my routine of setting up certificate authorities and access services, and I had completely missed the coming of WireGuard. So, without further ado, I set about quickly setting up a VPN that I could attach my phones to while I was out and about.

I created an Ubuntu server specifically for the task, then had a conversation in my head about ports (for a number of reasons, but mainly because, where the server is situated, who knows what stuff the connectivity supplier blocks from little people like me), and set about installing WireGuard.

    >sudo apt install wireguard

Well, that was tricky; not sure even I can cope with that level of complexity, to be honest (ed. Let's be honest, some days it's remarkable you even remember your own name Dave).

Then we need some available IP addresses for our little pool, so I wrote a little script to make some fake IP ranges: a fixed IPv4 range and a generated IPv6 one. I borrowed the concept from a post on DigitalOcean. It needs streamlining, but one of you lot can do that; I was just testing that WireGuard did what I was imagining in my head.

    Bash
    #!/bin/sh
    # Script to create a random IPv6 Range based upon the time and machine ID
    # written by Dave Wise - 2024
today=$(date +%s%N)
id=$(cat /var/lib/dbus/machine-id)
# Take just the hex digest; quoting avoids sha1sum's trailing " -" leaking in
sha=$(printf '%s' "$today$id" | sha1sum | cut -d' ' -f1)
rng=$(printf '%s' "$sha" | cut -c 31-)
fore=$(printf '%s' "$rng" | cut -c 1-2)
mid=$(printf '%s' "$rng" | cut -c 3-6)
end=$(printf '%s' "$rng" | cut -c 7-)
    echo "IPv4 Network|10.8.0.0/24"
    echo "Server IPv4|10.8.0.1/24"
    echo "IPv6 Network|fd$fore:$mid:$end::/64"
    echo "Server IPv6|fd$fore:$mid:$end::1/64"

This should give an output similar to the following:

    Bash
    dave@royo:~# ./create-ips.sh 
    IPv4 Network|10.8.0.0/24
    Server IPv4|10.8.0.1/24                                                                                                                                  
    IPv6 Network|fd1c:3351:fcad::/64                                                                                                                         
    Server IPv6|fd1c:3351:fcad::1/64

So there, we have some numbers to work with. Now all we need to do is create the config file and create some allowable connections, referred to as "Peers" in WireGuard terminology, and we should be good to go. In its most basic form, it is simply a case of creating public and private keys for every device (including the server) and then saying what is and what isn't allowed to connect.

So, let's do it. First we create the main configuration file for WireGuard on the server. For this we need to generate a private and public key pair.

    Bash
    #!/bin/sh
    # A simple script that creates a private and public key inside the
    # WireGuard configuration directory.
    #
    # Author: Dave Wise (c) 2024
    # This script is required to be run as root
wg genkey | tee /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key > /dev/null 2>&1

    Now we create our main configuration file using the information we have just created.

    /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = qD7teL1xPmQuGFJ034TkvKqo7HNB1GGGFpsi1fgcLFE=
    Address = 10.8.0.1/24, fd1c:3351:fcad::1/64
    # ListenPort = 51820
    ListenPort = 989
    SaveConfig = true
    PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
    PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth1 -j MASQUERADE
DNS = 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844

A couple of points to note: I have changed the default port to hide behind the secure FTP data port, so those pesky blocking ISPs will miss our packets; some will still be a pain, but this should get around most of them. I left the default port in as a comment so you can see it (who knows, you might want it). In theory you can also use port 123, the NTP port. I didn't, because I use mine.

We must also make sure we have turned on IP forwarding in sysctl.conf.

    /etc/sysctl.conf
    net.ipv4.ip_forward=1
    net.ipv6.conf.all.forwarding=1

    You should then see the following output from sysctl.

    >sudo sysctl -p
    net.ipv6.conf.all.forwarding = 1
    net.ipv4.ip_forward = 1

You can also see I added the masquerading to the WireGuard configuration, which makes sure traffic gets routed when it needs to be routed. I don't use UFW for anything, so I haven't included any config info for it here; I will let you all do some more googling if you want to add support for it.

Now to enable the service and run it. Yes, I know there are currently no peers, but I like to know stuff loads and runs.

Bash
sudo systemctl enable wg-quick@wg0.service
sudo systemctl start wg-quick@wg0.service

    Now to check whether it is running and doing its thing.

    sudo systemctl status wg-quick@wg0.service
    wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
         Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; vendor preset: enabled)
         Active: active (exited) since Wed 2021-08-25 15:24:14 UTC; 5s ago
           Docs: man:wg-quick(8)
                 man:wg(8)
                 https://www.wireguard.com/
                 https://www.wireguard.com/quickstart/
                 https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
                 https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8
        Process: 3245 ExecStart=/usr/bin/wg-quick up wg0 (code=exited, status=0/SUCCESS)
       Main PID: 3245 (code=exited, status=0/SUCCESS)
    
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] wg setconf wg0 /dev/fd/63
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] ip -4 address add 10.8.0.1/24 dev wg0
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] ip -6 address add fd1c:3351:fcad::1/64 dev wg0
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] ip link set mtu 1420 up dev wg0
    Aug 25 15:24:14 wg0 wg-quick[3279]: Rule added (v6)
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
    Aug 25 15:24:14 wg0 wg-quick[3245]: [#] ip6tables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
    Aug 25 15:24:14 wg0 systemd[1]: Finished WireGuard via wg-quick(8) for wg0.

So everything on the server is running; a quick check, using netstat, to make sure it is actually listening to the real world for connections.

    netstat -tunlp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      6346/sshd: dave@pts 
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      695/systemd-resolve 
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      983/sshd: /usr/sbin 
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      786/mariadbd        
    tcp6       0      0 :::22                   :::*                    LISTEN      983/sshd: /usr/sbin 
    tcp6       0      0 :::80                   :::*                    LISTEN      1017/apache2        
    tcp6       0      0 ::1:6010                :::*                    LISTEN      6346/sshd: dave@pts 
    tcp6       0      0 :::443                  :::*                    LISTEN      1017/apache2        
    udp        0      0 0.0.0.0:989             0.0.0.0:*                           -                           
    udp        0      0 127.0.0.53:53           0.0.0.0:*                           695/systemd-resolve 
    udp6       0      0 :::989                  :::*                                -                   

There it is, listening on UDP/UDP6 port 989. Obviously the "nosey" among you can see I am also running Apache and MariaDB on there, but that's for something else which I may, or more likely may not, document.
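As an aside, netstat comes from the deprecated net-tools package; if your server only has iproute2, ss gives the same view with the same flags (just a different tool, not part of the original setup):

```shell
# -t TCP, -u UDP, -n numeric, -l listening sockets, -p owning process (root only)
ss -tunlp
```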

Now we move on to creating our clients, or peers. Our peers also require key pairs, so we extend our tiny little script a little: passing a parameter makes it create a named pair in the current directory, otherwise it does the default (original) behaviour of creating a key pair in the WireGuard config directory. (ed. You should probably do more safety checks here Dave, you don't want to accidentally overwrite something like a muppet!)

    Bash
#!/bin/sh
# A simple script that creates a private and public key pair, either inside
# the WireGuard configuration directory (default) or, when a client name is
# given as the first parameter, in the current directory.
#
# Author: Dave Wise (c) 2024
# This script is required to be run as root

if [ -n "$1" ]; then
  wg genkey | tee "wg-client-$1-private.key" | wg pubkey | tee "wg-client-$1-public.key" > /dev/null 2>&1
else
  wg genkey | tee /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key > /dev/null 2>&1
fi

    For the purposes of this exercise, we will create a test client key pair and then add them to the WireGuard configuration.

    Bash
    > create-keys test
    > ls -l
    -rw------- 1 root root   90 May  6  2024 wg-client-test-private.key
    -rw-r--r-- 1 root root   90 May  6  2024 wg-client-test-public.key

I should probably create a script for creating a client that adds it to the WireGuard peer configuration. It should be a simple process really: create the keys, add them to wg0.conf, create a config file for the client end, and, if you're using a phone to connect (as I am in this case), create a QR code for the WireGuard phone app to read the configuration from. The Linux tool qrencode should do that.

Obviously this becomes a major software project if you want to keep track of allocated IP addresses etc. etc., which is not what this exercise is about. It's mainly to prove it works, start the process of automating stuff with scripts, tell everyone about it and let them use whatever they need from my ramblings and findings to extend their own tools and projects. (But remember people, I do love to hear from everyone who finds this stuff interesting and useful.) I just created a client directory to run everything in and kept track of the last number added. Some housekeeping would obviously be required, but really, how complicated do I want to make it? (ed. Seriously? Half a job Dave pfff)

    Bash
#!/bin/sh
# Create WireGuard Client script
# Author: Dave Wise © 2024
#
# Edit the variables in the environment section to suit your own requirements
# for your endpoints, servers and available IP ranges.
#

# [Environment Section] --------
endpoint="a.b.c.d:989"
dns="8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844"
ip4prefix="10.8.0."
ip6prefix="fd1c:3351:fcad::"
serverkey="91UNV/Xq7r801XzrWIhUsLuwWLIJn4kS6svnax9fRGw="
# [/Environment Section] -------

if [ -z "$1" ]; then
  echo "You must supply a client name as the parameter, otherwise, what's the point?"
  exit 1
fi

if [ "$(id -u)" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi

if [ ! -f ./.lastadded ]; then
  # Never start at 1 .. always assume the server is at 1
  toadd=2
else
  lastadded=$(cat ./.lastadded)
  toadd=$((lastadded + 1))
fi

ip4="$ip4prefix$toadd"
ip6="$ip6prefix$toadd"

wg genkey | tee "wg-client-$1-private.key" | wg pubkey | tee "wg-client-$1-public.key" > /dev/null
private=$(cat "wg-client-$1-private.key")
public=$(cat "wg-client-$1-public.key")

echo "Adding $1 to WireGuard"
wg set wg0 peer "$public" allowed-ips "$ip4/32,$ip6/128"

# Create config file
echo "Creating Config file for $1"
printf "[Interface]\nAddress = $ip4/32\nAddress = $ip6/64\nPrivateKey = $private\nDNS = $dns\n\n[Peer]\nPublicKey = $serverkey\nEndpoint = $endpoint\nAllowedIPs = 0.0.0.0/0, ::/0\n" > "wg-client-$1-config.conf"

# If there is a config file, then create the QR Code for it
configfile="wg-client-$1-config.conf"
if [ -f "$configfile" ]; then
  echo "Creating QR Code for $1"
  qrcodefile="wg-client-$1-qrcode.png"
  qrencode --read-from="$configfile" -o "$qrcodefile"
  if [ ! -f "$qrcodefile" ]; then
    echo "[Failed] Could not create QR Code $qrcodefile"
  fi
else
  echo "[Failed] Could not create config file $configfile"
fi

echo "$toadd" > .lastadded

That’s everything on the server side done. Now to install WireGuard on the phones, scan the QR codes, connect, and that is pretty much all there is to it. So there you have it: it really is a quick and simple VPN implementation that is still encrypted, has a low impact and works. I connect to mine via iPhone, Android phone (gotta have something to test stuff with), Linux VM and Windows 11 without issue.

    If you haven’t got qrencode on your system, then simply add it.

    >sudo apt install qrencode
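As a bonus, qrencode can also render the code straight into the terminal as UTF-8 blocks, which saves opening the PNG at all; the filename below is the one the client script above generates for a client called test:

```shell
# Draw the client config as a scannable QR code right in the terminal
conf="wg-client-test-config.conf"
qrencode -t ansiutf8 --read-from="$conf"
```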

You really can’t screw that one up either. Here are some iPhone screenshots I found online, because I forgot to grab mine as I added it.

I hope this helps someone, and I hope it inspires someone to create their own simple VPNs with WireGuard. It really is a cool thing.

    #peaceout

  • Quick reminder for NGINX and using HTACCESS for password protecting a file


Obviously this is easy enough to look up on Google these days, but I thought I would write it here so I remember where to find it. If it helps anyone else along the way, then that is a double bonus, as it were.

    Nginx
    server {
      listen 443 ssl;
      server_name servername;
      client_max_body_size    64M;
    
      ...
      
      index index.php index.html;
    
      # The path is relative to the site root, you can
      # also do this to directories, but don't forget to
      # check the fastcgi settings.
      
  location ~ /filename\.ext$ {
    try_files $uri $uri/ =404;
    auth_basic           "Administrators Only";
    auth_basic_user_file /path/to/.htpasswd;
      }
    }
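The .htpasswd file itself needs to exist, of course. If you have apache2-utils installed, the htpasswd tool will make one; if not, openssl can do it. A sketch (the username dave and password secret are placeholders):

```shell
# Create the password file with a single user (placeholders: dave / secret).
# openssl's -apr1 produces the Apache MD5 format that nginx's auth_basic accepts.
printf 'dave:%s\n' "$(openssl passwd -apr1 'secret')" > .htpasswd
cat .htpasswd
```

Move it to wherever auth_basic_user_file points, and make sure the nginx worker user can read it.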
    

    I hope that helps, it always helps me when I write stuff down anyway 🙂

    #peaceout

  • Kali Linux in VirtualBox … the constant “Brain-Ache” with Guest Additions


    I appreciate there are a plethora of posts about this particular subject online, ranging from the simple to the sublime, however I just need something to remember the basic order of things and something to remind me of the component parts.

    Let’s start with what we don’t need:

    • virtualbox-guest-dkms
    • virtualbox-guest-utils
    • virtualbox-guest-x11

So we must remove them:

    ShellScript
    sudo apt remove -y virtualbox-guest-x11 virtualbox-guest-dkms virtualbox-guest-utils

    Now let’s remember what we do need:

    • Linux Headers
    • build-essential
    • dkms
    ShellScript
sudo apt install linux-headers-$(uname -r) build-essential dkms
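The usual cause of the brain-ache is the headers package not matching the running kernel, so it is worth a quick sanity check before compiling anything (just a convenience check, not part of the original notes):

```shell
# The headers package must match the running kernel exactly
kernel=$(uname -r)
if dpkg -s "linux-headers-$kernel" > /dev/null 2>&1; then
  echo "headers present for $kernel"
else
  echo "headers missing for $kernel"
fi
```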

Right, now onto the more VirtualBox-specific stuff, because there are guest additions for every version that is released (hence this is a “Brain-Ache”: you have to remember to do it every time you upgrade the kernel).

Edit: m-a (module-assistant) may be required; I run it anyway, just in case.

ShellScript
# m-a obtains any required headers that may be missing for the
# compilation of everything on your system
m-a prepare

First, let’s download our version of the guest additions. Check your version of VirtualBox in the About dialog and you will see the version number you need; in the image you will see it is 7.0.20.

    So you will need to replace the x.x.x below with your version.

    ShellScript
    wget https://download.virtualbox.org/virtualbox/x.x.x/VBoxGuestAdditions_x.x.x.iso

    So we now have the correct version of the guest additions image, now we need to install them.

    ShellScript
sudo mkdir /media/iso
sudo mount /path/to/iso/VBoxGuestAdditions_x.x.x.iso /media/iso -o loop
sudo /media/iso/VBoxLinuxAdditions.run

Because I happen to keep several kernel versions on my machine, I build the guest additions for all of them using the rcvboxadd command.

    ShellScript
    sudo /sbin/rcvboxadd quicksetup all

But you can replace the all with nothing at all and it will use the current kernel, or you can replace it with a specific kernel by supplying the version number, e.g. 6.8.11.

    There you go, simples and remembered for posterity, hopefully someone else will read this and think .. ‘Hey, that’s proper simple, just what I needed’. You’re welcome.


    The feature Kali Dragon image is a desktop image that can be downloaded from https://wallpapers.com/kali-linux

  • Oracle versus Oracle … and yes it is as sad as it sounds


Well, well, well … a chat with a colleague and I find myself fighting with Oracle Linux. I started fighting with version 9.3, but alas, 9.3 doesn’t actually run Oracle’s own f’ing database … well of course it wouldn’t, why would it? I had to downgrade to 8.9: time wasted, lots of my time, wasted on nonsense. This isn’t meant to be a “how-to” or “what if” set of instructions; this is just an encounter I have had with apparently unbreakable software.

My internet speed isn’t the best here (if I got any further from the exchange, I would be signing up for Starlink :/), and 9.3 was 10GB while 8.9 was 12.3GB. Bigger than the average 4.7GB DVD, but DVD images they were; it said so in the file names.

    ShellScript
> ls downloads/
OracleLinux-R8-U9-x86_64-dvd.iso

Thankfully, installing the OSes into VirtualBox (of course, another Oracle product) was relatively painless. However, there were glitches installing the VBox guest additions: the required kernel headers aren’t installed by default … go figure … I need someone to remind me why Oracle has this market share.

But after that, it all just goes downhill. As I mentioned, after my mistake of assuming something simple, the latest version of Oracle Database Free (23c) doesn’t actually run, nay, doesn’t even install, on the latest version of Oracle’s own operating system. I fought with it for a while thinking I must be crazy or something; who would develop and release an operating system that did not run their very own championed and celebrated product? Clearly Oracle would. So, upon conceding an undesirable defeat, I deleted the VM and started again with the Oracle Linux 8.9 DVD image.

    Then there is more time lost hunting around for the RPMs you are going to need to install. Here I needed the database install file and the RPMs for the clients, e.g. sqlplus.

    You can get them from here: https://www.oracle.com/database/technologies/free-downloads.html

    ShellScript
> ls -l *.rpm
-rw-r--r--. 1 1751573448 Sep 11 14:59 oracle-database-free-23c-1.0-1.el8.x86_64.rpm
-rw-rw-r--. 1   55788060 Oct 19 09:52 oracle-instantclient-basic-21.12.0.0.0-1.el8.x86_64.rpm
-rw-rw-r--. 1     728100 Oct 19 09:51 oracle-instantclient-sqlplus-21.12.0.0.0-1.el8.x86_64.rpm

    I truly do not understand why they are not included with the OS; these bits should come as standard with the operating system, because I bet dollars to doughnuts that if someone just wants a Linux distribution, they will not be installing Oracle’s offering out of personal choice. So Oracle needs to sort that out, and on top of that, they need to make sure all of their products work together before releasing them into the wild.

    When I first wrote up my notes (yup, these are my notes, I remember things by making them feel funnier than they actually are), I wrote a whole section about installing the database using dnf, using sqlplus to look inside what had just been installed, etc … however … I seem to have deleted those parts of my notes when I changed the order of some of the funnies around … of course, this adds to the whole scenario somewhat and is perfectly in keeping with everything that has happened thus far in the process.
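    In lieu of the lost notes, here is a minimal sketch of the sort of post-install sanity checks they covered, assuming the default FREE instance the RPM creates and a sysdba connection (connect first with sqlplus / as sysdba):

    ```sql
    -- Confirm the exact release that actually got installed
    SELECT banner_full FROM v$version;

    -- Confirm the instance is up
    SELECT instance_name, status FROM v$instance;

    -- List the pluggable databases the install created
    SELECT name, open_mode FROM v$pdbs;
    ```

    Nothing clever, just enough to prove the thing installed and started before wasting any more time on it.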

    But to give an overview of what’s what, we need to understand that Oracle uses the term Database somewhat differently to what has now become “normalised” thinking. It is actually more akin to the good old days of dBase III+/Clipper or the beefcake IBM Db2 (which is still in existence, but significantly improved). Anyway, the term Database in Oracle terms is the server/software combo, upon which the data store is going to sit. It is this data store that is more akin to the usual suspects such as MySQL, PostgreSQL, MariaDB etc.

    In Oracle Free 23c’s case, it creates a FREE file store, with a CDB and the ability to create up to 16 pluggable databases. It is these pluggable databases that are your equivalent to a default MySQL installation. Inside the PDB you then create your tablespaces and then your data structures and stored procedures. Hopefully that all makes sense to you; if not, perhaps you should not be reading further, just in case you break something? (LOL Of course I am only kidding, like I care if you break something or not, gotta keep breaking things to be able to un-break them and learn, after all.) Anyway, when you install the software there are two ways you can go: create an oracle user and build its environment directly, or create an overriding admin (instead of sys as sysdba). I of course decided to make it more complicated for myself by taking the second of the two paths.
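    If you want to see that CDB/PDB layering for yourself, the container views spell it out. A quick sketch, assuming you are connected as a privileged user in the root container:

    ```sql
    -- CON_ID 1 is the root (CDB$ROOT), 2 is the seed template (PDB$SEED),
    -- and 3 upwards are the pluggable databases you create yourself
    SELECT con_id, name FROM v$containers ORDER BY con_id;

    -- And which container this session is currently sitting in
    SHOW con_name;
    ```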

    So, first add your nice shiny new PDB with your chosen user/pass combo that you specified at install.

    ShellScript
    sqlplus sys/adminpassword@//localhost:1521/FREE
    SQL
    CREATE PLUGGABLE DATABASE monopdb
      ADMIN USER monouser IDENTIFIED BY monopassword
      ROLES = (dba)
      DEFAULT TABLESPACE library
        DATAFILE '/opt/oracle/product/23c/dbhomeFree/dbs/monopdb/library01.dbf' SIZE 250M AUTOEXTEND ON
      STORAGE (MAXSIZE 2G)
      PATH_PREFIX = '/opt/oracle/product/23c/dbhomeFree/dbs/monopdb/';

    That then creates a pluggable database that can be used like any other database, with its own files, users, etc, without interference. It’s quite interesting that you can still create separate paths for database files; well, maybe interesting is not the right word. I am not sure it is entirely required any more with seamless storage and cloud storage facilities, but still natty all the same. Now it is possible to log in as that user, but there are some other hurdles we need to overcome first. Of course there are, this is an Oracle install after all.

    Pluggable databases, or PDBs, are created as mounted only; they aren’t open for access in any way, shape or form. In version 12 of Oracle you had to create a trigger on database startup to reopen the PDBs every time; however, the newer versions do allow for saving the state, so you only have to open it once and save the state. The trigger does also still work though, if you want to add it anyway.

    SQL
    SQL> show pdbs;
    SQL
    CON_ID  CON_NAME                     OPEN MODE  RESTRICTED
    ------  ---------------------------  ---------  ----------
         4  MONOPDB                      MOUNTED
    SQL
    ALTER PLUGGABLE DATABASE monopdb OPEN;
    SQL
    CON_ID  CON_NAME                     OPEN MODE  RESTRICTED
    ------  ---------------------------  ---------  ----------
         4  MONOPDB                      READ WRITE NO
    SQL
    ALTER PLUGGABLE DATABASE monopdb SAVE STATE;

    As I said, the trigger still works if you would rather implement that over doing things manually, not sure I understand why, but here it is anyway.

    SQL
    CREATE OR REPLACE TRIGGER open_pdbs
      AFTER STARTUP ON DATABASE
    BEGIN
      EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
    END open_pdbs;
    /

    Right, so we have an accessible PDB … woohoo! We need to adjust that user’s permissions so it can create things in the nice shiny new database.

    We can check our user’s privileges by logging into sqlplus as them; presumably we remember the new password we assigned when we created the PDB? You must have written it down on a post-it note and stuck it to your screen, surely? Wait, what year is it again?

    SQL
    sqlplus user/password@//localhost:1521/monopdb

    SQL> select * from session_roles;

    ROLE
    ---------------------------------------------------
    PDB_DBA
    WM_ADMIN_ROLE

    As you can see, it is missing the ability to do anything other than log in, really, really useful default state. Anyway …

    Firstly, as the CDB system user, you need to make sure you are in the PDB container for this session, otherwise when you try and do anything it will say “no such user”. It’s helpful like that; it has very intelligent error messages that are almost impossible to misinterpret … that is largely to do with the fact that they are completely unintelligible to start with.

    SQL
    SQL> alter session set container=monopdb;

    Session altered.

    Now let us add some useful abilities for this pup, like connecting and creating stuff; being able to store stuff in the said created things would probably be a good idea too, to be fair, but you never really can tell.

    Now we need to reconnect as the CDB system user and let the magic begin.

    SQL
    SQL> grant connect resource …………………………

    Wait, I am not sure what is or isn’t needed, I think I will be better, in the first instance, to grant everything and then reverse out stuff that’s not needed.

    SQL
    SQL> grant ALL privileges to monouser;

    Grant succeeded.

    Note: It complained when ALL was in lowercase, but worked when it was capitalised. Go figure, that will be another little idiosyncrasy of Oracle then, I guess.

    I am not entirely sure which ones are needed at this point, so I am currently taking a belt-and-braces approach and just making stuff work; I can always revoke stuff after the fact.
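    For what it is worth, a more surgical starting point than GRANT ALL might look something like the below. The exact list depends entirely on what the application actually needs, so treat it as a sketch rather than gospel:

    ```sql
    -- A minimal-ish grant set for a typical application schema (sketch only)
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW,
          CREATE SEQUENCE, CREATE PROCEDURE, CREATE TRIGGER TO monouser;

    -- Without a quota, the user can create tables but not actually store rows;
    -- 'library' is the default tablespace from the CREATE PLUGGABLE DATABASE earlier
    ALTER USER monouser QUOTA UNLIMITED ON library;
    ```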

    In case you needed to know, revoking is as simple as granting.

    SQL
    SQL> revoke create session from PDBUSER;

    Revoke succeeded.

    -- See, f&*k you PDB user .. Muwahahahaha

    Right, so we have now updated our user responsible for the PDB, we can check what they can now do inside their little world.

    SQL
    > sqlplus monouser/password@//localhost:1521/monopdb

    SQL*Plus: Release 23.0.0.0.0 - Production on Mon Feb 5 13:39:10 2024
    Version 23.3.0.23.09

    Copyright (c) 1982, 2023, Oracle.  All rights reserved.

    Last Successful login time: Mon Feb 05 2024 13:23:34 +00:00

    Connected to:
    Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
    Version 23.3.0.23.09

    SQL> show user;
    USER is "MONOUSER"

    SQL> select * from session_roles;

    ROLE
    --------------------------------------------------------------------------------
    CONNECT
    RESOURCE
    SODA_APP
    PDB_DBA
    WM_ADMIN_ROLE

    Of course, we can also check the privileges granted, but bear in mind we granted everything to the user and haven’t, as yet, revoked anything.

    SQL
    SQL> select * from session_privs;
    PRIVILEGE

    EXECUTE ANY DOMAIN
    DROP ANY DOMAIN
    ALTER ANY DOMAIN
    CREATE ANY DOMAIN
    CREATE DOMAIN
    ADMINISTER REDACTION POLICY
    ADMINISTER FINE GRAINED AUDIT POLICY
    ADMINISTER ROW LEVEL SECURITY POLICY
    DROP ANY MLE
    ALTER ANY MLE
    CREATE ANY MLE

    PRIVILEGE

    CREATE MLE
    READ ANY PROPERTY GRAPH
    DROP ANY PROPERTY GRAPH
    ALTER ANY PROPERTY GRAPH
    CREATE ANY PROPERTY GRAPH
    CREATE PROPERTY GRAPH
    DROP LOGICAL PARTITION TRACKING
    CREATE LOGICAL PARTITION TRACKING
    DROP ANY ANALYTIC VIEW
    ALTER ANY ANALYTIC VIEW
    CREATE ANY ANALYTIC VIEW

    PRIVILEGE

    CREATE ANALYTIC VIEW
    DROP ANY HIERARCHY
    ALTER ANY HIERARCHY
    CREATE ANY HIERARCHY
    CREATE HIERARCHY
    DROP ANY ATTRIBUTE DIMENSION
    ALTER ANY ATTRIBUTE DIMENSION
    CREATE ANY ATTRIBUTE DIMENSION
    CREATE ATTRIBUTE DIMENSION
    READ ANY TABLE
    ALTER ANY CUBE BUILD PROCESS

    PRIVILEGE

    SELECT ANY CUBE BUILD PROCESS
    ALTER ANY MEASURE FOLDER
    SELECT ANY MEASURE FOLDER
    EXECUTE DYNAMIC MLE
    USE ANY JOB RESOURCE
    LOGMINING
    CREATE ANY CREDENTIAL
    CREATE CREDENTIAL
    ALTER LOCKDOWN PROFILE
    DROP LOCKDOWN PROFILE
    CREATE LOCKDOWN PROFILE

    PRIVILEGE

    SET CONTAINER
    CREATE PLUGGABLE DATABASE
    FLASHBACK ARCHIVE ADMINISTER
    DROP ANY SQL TRANSLATION PROFILE
    USE ANY SQL TRANSLATION PROFILE
    ALTER ANY SQL TRANSLATION PROFILE
    CREATE ANY SQL TRANSLATION PROFILE
    CREATE SQL TRANSLATION PROFILE
    ADMINISTER SQL MANAGEMENT OBJECT
    UPDATE ANY CUBE DIMENSION
    UPDATE ANY CUBE BUILD PROCESS

    PRIVILEGE

    DROP ANY CUBE BUILD PROCESS
    CREATE ANY CUBE BUILD PROCESS
    CREATE CUBE BUILD PROCESS
    INSERT ANY MEASURE FOLDER
    DROP ANY MEASURE FOLDER
    DELETE ANY MEASURE FOLDER
    CREATE ANY MEASURE FOLDER
    CREATE MEASURE FOLDER
    UPDATE ANY CUBE
    SELECT ANY CUBE
    DROP ANY CUBE

    PRIVILEGE

    CREATE ANY CUBE
    ALTER ANY CUBE
    CREATE CUBE
    SELECT ANY CUBE DIMENSION
    INSERT ANY CUBE DIMENSION
    DROP ANY CUBE DIMENSION
    DELETE ANY CUBE DIMENSION
    CREATE ANY CUBE DIMENSION
    ALTER ANY CUBE DIMENSION
    CREATE CUBE DIMENSION
    COMMENT ANY MINING MODEL

    PRIVILEGE

    ALTER ANY MINING MODEL
    SELECT ANY MINING MODEL
    DROP ANY MINING MODEL
    CREATE ANY MINING MODEL
    CREATE MINING MODEL
    EXECUTE ASSEMBLY
    EXECUTE ANY ASSEMBLY
    DROP ANY ASSEMBLY
    ALTER ANY ASSEMBLY
    CREATE ANY ASSEMBLY
    CREATE ASSEMBLY

    PRIVILEGE

    ALTER ANY EDITION
    DROP ANY EDITION
    CREATE ANY EDITION
    CREATE EXTERNAL JOB
    CHANGE NOTIFICATION
    CREATE ANY SQL PROFILE
    ADMINISTER ANY SQL TUNING SET
    ADMINISTER SQL TUNING SET
    ALTER ANY SQL PROFILE
    DROP ANY SQL PROFILE
    SELECT ANY TRANSACTION

    PRIVILEGE

    MANAGE SCHEDULER
    EXECUTE ANY CLASS
    EXECUTE ANY PROGRAM
    CREATE ANY JOB
    CREATE JOB
    ADVISOR
    EXECUTE ANY RULE
    DROP ANY RULE
    ALTER ANY RULE
    CREATE ANY RULE
    CREATE RULE

    PRIVILEGE

    IMPORT FULL DATABASE
    EXPORT FULL DATABASE
    EXECUTE ANY RULE SET
    DROP ANY RULE SET
    ALTER ANY RULE SET
    CREATE ANY RULE SET
    CREATE RULE SET
    EXECUTE ANY EVALUATION CONTEXT
    DROP ANY EVALUATION CONTEXT
    ALTER ANY EVALUATION CONTEXT
    CREATE ANY EVALUATION CONTEXT

    PRIVILEGE

    CREATE EVALUATION CONTEXT
    GRANT ANY OBJECT PRIVILEGE
    FLASHBACK ANY TABLE
    DEBUG ANY PROCEDURE
    DEBUG CONNECT ANY
    DEBUG CONNECT SESSION
    RESUMABLE
    ON COMMIT REFRESH
    MERGE ANY VIEW
    ADMINISTER DATABASE TRIGGER
    ADMINISTER RESOURCE MANAGER

    PRIVILEGE

    DROP ANY OUTLINE
    ALTER ANY OUTLINE
    CREATE ANY OUTLINE
    DROP ANY CONTEXT
    CREATE ANY CONTEXT
    DEQUEUE ANY QUEUE
    ENQUEUE ANY QUEUE
    MANAGE ANY QUEUE
    DROP ANY DIMENSION
    ALTER ANY DIMENSION
    CREATE ANY DIMENSION

    PRIVILEGE

    CREATE DIMENSION
    UNDER ANY TABLE
    EXECUTE ANY INDEXTYPE
    GLOBAL QUERY REWRITE
    QUERY REWRITE
    UNDER ANY VIEW
    DROP ANY INDEXTYPE
    ALTER ANY INDEXTYPE
    CREATE ANY INDEXTYPE
    CREATE INDEXTYPE
    EXECUTE ANY OPERATOR

    PRIVILEGE

    DROP ANY OPERATOR
    ALTER ANY OPERATOR
    CREATE ANY OPERATOR
    CREATE OPERATOR
    EXECUTE ANY LIBRARY
    DROP ANY LIBRARY
    ALTER ANY LIBRARY
    CREATE ANY LIBRARY
    CREATE LIBRARY
    UNDER ANY TYPE
    EXECUTE ANY TYPE

    PRIVILEGE

    DROP ANY TYPE
    ALTER ANY TYPE
    CREATE ANY TYPE
    CREATE TYPE
    DROP ANY DIRECTORY
    CREATE ANY DIRECTORY
    DROP ANY MATERIALIZED VIEW
    ALTER ANY MATERIALIZED VIEW
    CREATE ANY MATERIALIZED VIEW
    CREATE MATERIALIZED VIEW
    GRANT ANY PRIVILEGE

    PRIVILEGE

    ANALYZE ANY
    ALTER RESOURCE COST
    DROP PROFILE
    ALTER PROFILE
    CREATE PROFILE
    DROP ANY TRIGGER
    ALTER ANY TRIGGER
    CREATE ANY TRIGGER
    CREATE TRIGGER
    EXECUTE ANY PROCEDURE
    DROP ANY PROCEDURE

    PRIVILEGE

    ALTER ANY PROCEDURE
    CREATE ANY PROCEDURE
    CREATE PROCEDURE
    FORCE ANY TRANSACTION
    FORCE TRANSACTION
    ALTER DATABASE
    AUDIT ANY
    ALTER ANY ROLE
    GRANT ANY ROLE
    DROP ANY ROLE
    CREATE ROLE

    PRIVILEGE

    DROP PUBLIC DATABASE LINK
    CREATE PUBLIC DATABASE LINK
    CREATE DATABASE LINK
    SELECT ANY SEQUENCE
    DROP ANY SEQUENCE
    ALTER ANY SEQUENCE
    CREATE ANY SEQUENCE
    CREATE SEQUENCE
    DROP ANY VIEW
    CREATE ANY VIEW
    CREATE VIEW

    PRIVILEGE

    DROP PUBLIC SYNONYM
    CREATE PUBLIC SYNONYM
    DROP ANY SYNONYM
    CREATE ANY SYNONYM
    CREATE SYNONYM
    DROP ANY INDEX
    ALTER ANY INDEX
    CREATE ANY INDEX
    DROP ANY CLUSTER
    ALTER ANY CLUSTER
    CREATE ANY CLUSTER

    PRIVILEGE

    CREATE CLUSTER
    REDEFINE ANY TABLE
    DELETE ANY TABLE
    UPDATE ANY TABLE
    INSERT ANY TABLE
    SELECT ANY TABLE
    COMMENT ANY TABLE
    LOCK ANY TABLE
    DROP ANY TABLE
    BACKUP ANY TABLE
    ALTER ANY TABLE

    PRIVILEGE

    CREATE ANY TABLE
    CREATE TABLE
    DROP ROLLBACK SEGMENT
    ALTER ROLLBACK SEGMENT
    CREATE ROLLBACK SEGMENT
    DROP USER
    ALTER USER
    BECOME USER
    CREATE USER
    UNLIMITED TABLESPACE
    DROP TABLESPACE

    PRIVILEGE

    MANAGE TABLESPACE
    ALTER TABLESPACE
    CREATE TABLESPACE
    RESTRICTED SESSION
    ALTER SESSION
    CREATE SESSION
    AUDIT SYSTEM
    ALTER SYSTEM

    250 rows selected.

    This now looks infinitely more promising; the new user and PDB should be usable with abandon. Let us start by creating some tablespace for our new database tables, procedures and whatever else we may need along the way.

    So we are now in a position where we can create our very first tablespace, well, we can hope that we are at least. Without further ado.

    SQL
    SQL> create tablespace monodb
      2  datafile 'monodb.dbf' size 500k reuse
      3  autoextend on next 500k maxsize 100m;

    Tablespace created.

    SQL> alter pluggable database default tablespace monodb;

    Pluggable database altered.

    A simple tablespace that can grow as needed; that will do for these purposes, making things work and giving us somewhere to store stuff.
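    If you want to double-check what actually landed, the dictionary views will confirm the tablespace and where its datafile ended up. A quick sketch, run as the PDB admin user:

    ```sql
    -- Confirm the new tablespace exists and is online
    SELECT tablespace_name, status, contents FROM dba_tablespaces;

    -- And where its datafile ended up, plus the autoextend settings we asked for
    SELECT file_name, bytes, autoextensible, maxbytes
      FROM dba_data_files
     WHERE tablespace_name = 'MONODB';
    ```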

    But what we have now confirmed, at least with the free version of Oracle 23c, is that we can make use of up to 16 PDBs, each with their own user space, without risk of bleed into other PDB spaces. More of the model we are used to with the usual suspects: MySQL, Maria, Postgres etc …

    Of course the testing continues …

    We will have to now create some of the usual tables etc in which we can store some stuff, why have a database with nothing in it? Well who knows, someone out there might want one, perhaps it looks pretty?

    We will make a Book Library, because it was either that or a contact database, but as you can probably tell from my ramblings, I don’t know that many people to put in it LOL!

    SQL
    SQL> create table books (
      2    id      number     generated by default on NULL as IDENTITY,
      3    title   varchar(256) not null,
      4    genre   varchar(20),
      5    author  varchar(100),
      6    isbn    varchar(13) unique
      7  ) tablespace monodb;

    Table created.

    SQL> describe books;
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     ID                                        NOT NULL NUMBER
     TITLE                                              VARCHAR2(256)
     GENRE                                              VARCHAR2(20)
     AUTHOR                                             VARCHAR2(100)
     ISBN                                               VARCHAR2(13)

    We have, despite Oracle’s best efforts to thwart my progress, a database with a single table, and it is ready to populate. I don’t want to use WebLogic … I didn’t like it in 2000 … I am not about to start liking it now. I suspect there must be a PHP PDO driver out there waiting for me to find it, install it, and test further.
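    Before chasing any drivers, a quick smoke test from sqlplus proves the table and its identity column behave. The rows here are obviously just made-up sample data:

    ```sql
    -- Sample rows; the identity column fills in id for us
    INSERT INTO books (title, genre, author, isbn)
    VALUES ('Dune', 'Sci-Fi', 'Frank Herbert', '9780441172719');

    INSERT INTO books (title, genre, author, isbn)
    VALUES ('Dracula', 'Horror', 'Bram Stoker', '9780141439846');

    COMMIT;

    SELECT id, title, author FROM books ORDER BY id;
    ```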

    But before we do all of that, I want to be able to use Ansible to create backups of the databases and server, but interestingly, I should have thought about this before starting anything because … it needs to have ARCHIVELOG enabled for backups and restores to work … of course it does.

    SQL
    SQL> alter database close;
    SQL> alter database archivelog;
    SQL> alter database open;

    In my case, for some unknown reason, I had to restart the Oracle services at this point; this managed to get the databases and PDBs opened in READ WRITE mode again, but with ARCHIVELOG on.

    SQL
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /opt/oracle/product/23c/dbhomeFree/dbs/arch
    Oldest online log sequence     21
    Next log sequence to archive   21
    Current log sequence           20

    So with that done, it should now be possible to use rman to create backups, so let’s test that theory manually. If this works, then creating a script similar to this that can be called by an Ansible script (sorry, ffs, Playbook; so many new words for the same shit, in this case YAML, because why use one mark-up language when you can invent another one to do the same thing?) should be relatively straightforward.

    ShellScript
    > rman target sys/password@//localhost:1521/free

    RMAN> backup database
    2>  format "/home/oracle/backups/backup_%U";

    Starting backup at 23-FEB-24
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 23-FEB-24

    [oracle@mono backups]$ ls -la
    total 3324920
    drwxr-xr-x. 2 oracle oinstall       4096 Feb 23 15:58 .
    drwx------. 8 oracle oinstall       4096 Feb 23 14:59 ..
    -rw-r-----. 1 oracle oinstall 1634172928 Feb 23 15:58 backup_012jse1u_1_1_1
    -rw-r-----. 1 oracle oinstall  627597312 Feb 23 15:58 backup_022jse21_2_1_1
    -rw-r-----. 1 oracle oinstall  635641856 Feb 23 15:58 backup_032jse22_3_1_1
    -rw-r-----. 1 oracle oinstall  507281408 Feb 23 15:59 backup_042jse23_4_1_1

    And there we have a backup of everything, happy days.

    Please note I want to back up everything, so I use the sys user and password and connect to the actual core instance; this then allows me to back up every PDB and option.

    To be quite honest, I got extremely bored by this point. I have managed to get Oracle Free 23c installed on Oracle Linux 8.9, created a PDB instance with its own admin user (granted, the user is all powerful at this point and needs to have its wings clipped) and created a tablespace and basic data structure within it to store data in. That was followed by making it work with rman to create backups that can be restored, in such a way that a script (sorry, playbook) can be written in YAML for Ansible to be able to do the backups remotely and in its own time, without relying on cron or any other local system.

    If you want me to provide a script and YAML example, or indeed a PHP example using the PDO library (which incidentally needs to be recompiled onto the PHP system, because it also requires the oracle clients to be installed on whichever environment is calling the library), then please do let me know, I can happily ignore as many requests as you want to send LOL!

    I hope this experience has helped you along your way with Oracle Linux, Oracle FREE 23c, rman and Ansible … if not … I hope it has just entertained you for the last few minutes of reading.

    #peaceout


    Sources (in no particular order):

    A note about the sources: most of them are Oracle’s somewhat dubious documentation, but there are, amongst them, a couple of diamond resources that you would be good to take note of. First of all, kewl.org. The wiki is full of really useful stuff, make sure you check it out. kewl.org is written and maintained by someone with a big brain! Then there is the Stack Overflow link, which proved absolutely invaluable in showing the order of things and, to coin someone else's catch phrase, "This is the way".
  • Beep Beep – exercising is tough

    Beep Beep – exercising is tough

    It is no shocker that the various lock-downs over the last 4 years have taken their toll on my svelte being, well, that’s an understatement, they have kicked me to the curb and rolled me around in fat like it’s snow … so my snowball self is now huge.

    Much like everyone else, I have turned to my phone to help me, but I couldn’t find what I wanted. Whilst it is very cool that there are apps out there that will sort out your dietary intake as well as give you road maps for x, y and z, these are just not what I was looking for. Exercise is common sense after all; little and often, like your food, and you’re good to rock. I just wanted something to tell me when to switch up what I was doing. To go from push-ups, to sit-ups, to planks … etc.

    My time is somewhat restricted and I don’t always have a set amount of time to do things, so I wanted a simple app where I could set an exercise time, from 5 mins to 30 mins, with intervals from 30s to … whatever, and for it to notify me when it is time to do something else. After much searching, nothing. I could spend the 5 – 15 quid on an all singing and all dancing “lifestyle adjuster”, but nothing would just do what I needed it to do. So here I am, writing about my lack of discovery and telling you all that I wrote a very simple JavaScript app to do exactly that. Again, it’s not epic and of course there are probably millions of ways of doing it differently using state changes etc. But it works.

    I have given it the fun title of “Beep Regime”, because I am, if nowt else, a simple man. Start beeps, interval beeps and an end beep. So I created a web form which captures everything I need … a duration and an interval … very … erm … comprehensive?

    With that done, all there is left to do is daisy-chain the correct number of timers together until it is complete. It is also worth noting here that I did zero error checking, because I am not stupid and know the interval has to be smaller than the duration (or the same size).

    Let’s start with some pseudo code to make sure my brain stays on track; always good to remember what it is you’re actually supposed to be doing after you are distracted answering other people’s questions on Stack Overflow or wherever.

    Pseudo Code

    Plaintext
    Start
      preload start beep, interval beep and end beep
      number of timers = duration / interval
      timer 1 plays start beeps with a longer delay to start
      for 1 to number of timers
        wait interval and play interval beep
      timer n (last timer) plays end beeps
      congratulate the user
    End

    I added a normal clock to the home page of my little form when I was testing timers and intervals in JavaScript; it’s now a permanent feature. Well, why not I say, everyone loves a clock, right? I have also added some other superfluous functions for checking things are what they say they are; always good to do some basic checking.

    It all relies on JavaScript behaving as it always has done; there is nothing to say it won’t suddenly stop working in the future once browser technology changes. In its really simplest terms … it creates a timer, and on timeout, it looks to see if another timer is needed. If not, it plays the end beeps, otherwise it plays the interval beeps … rinse and repeat as much as you want.

    Here is the code, but I will also attach a little package to this post if anyone wants to download everything including the beeps.

    var started, ended, curIntervals = 0, sections = 0;

    function clearAllIntervals() {
        // Create one more interval just to learn the highest id, then clear everything below it
        const interval_id = window.setInterval(function(){}, Number.MAX_SAFE_INTEGER);
        for (let i = 1; i < interval_id; i++) {
            window.clearInterval(i);
        }
    }

    function is_val(value) {
        return (parseInt(value) % 1 === 0);
    }

    function iTimer(start, duration, total) {
        curIntervals += 1;

        // With milliseconds
        var intervalTimer = setInterval(function() {
            var curTime = Date.now();
            var split = curTime - start;
            mm = Math.floor((split / 1000 / 60) << 0);
            ss = Math.floor((split / 1000) % 60);
            ms = Math.floor(split % 100);
            document.getElementById("countdown").textContent = "Interval " + curIntervals + " "
                + String(mm).padStart(2, '0') + ":" + String(ss).padStart(2, '0') + ":" + String(ms).padStart(2, '0');

            // Section ding
            let ding = new Audio('section-beeps.mp3');
            ding.load();
            ding.addEventListener("ended", function() {
                // Wrapped in a function so the next iTimer isn't invoked immediately
                setTimeout(function() { iTimer(Date.now(), duration, total - curIntervals); }, 5000);
            });

            // Completion success
            let success = new Audio('success.mp3');
            success.load();

            if (!(curTime <= (start + duration * 60000))) {
                // Clear timer
                clearInterval(intervalTimer);
                if (curIntervals < total) {
                    document.getElementById("countdown").textContent = "Don't forget to breathe";
                    ding.play();
                } else {
                    if (curIntervals > 1) intWord = "intervals";
                    else intWord = "interval";
                    if (duration > 1) durWord = "minutes";
                    else durWord = "minute";
                    document.getElementById("countdown").textContent = "Congratulations! You did " + curIntervals + " " + intWord + " of " + duration + " " + durWord;
                    success.play();
                    curIntervals = 0;
                    ended = Date.now();
                    setTimeout(function() {
                        window.location.reload();
                    }, 6000);
                }
            }
        }, 10);
    }

    function changeTime() {
        var d = new Date();
        var hour = d.getHours();
        var minute = d.getMinutes();
        var second = d.getSeconds();
        document.getElementById("clock").textContent = hour + ':' + minute + ':' + second;
        setTimeout(changeTime, 1000);
    }
    changeTime();

    let start = new Audio('start-beeps.mp3');
    start.load();

    function letsgo() {
        clearAllIntervals();
        intform = document.getElementById("starttraining");
        thisinterval = intform.interval.value;
        thisduration = intform.duration.value;
        curIntervals = 0;
        if (is_val(thisinterval) && is_val(thisduration)) {
            if (thisinterval <= thisduration) {
                intform.intervalopts.disabled = true;
                total = thisduration / thisinterval;
                document.getElementById("countdown").textContent = "Get Ready!! Here we go";
                start.addEventListener("ended", function() {
                    started = Date.now();
                    iTimer(Date.now(), thisinterval, total);
                });
                start.play();
            } else {
                alert("It may be prudent to have intervals that are shorter than the total duration");
            }
        } else {
            alert("You broke the interval and duration, you probably want to fix that");
        }
    }

    As I said, not the most sophisticated of solutions, but you will see it is concise and requires no additional libraries for it to work, so its footprint is pretty small and it does what it says on the tin. I did look at the Moment library, but it was way more than I needed (it seems to be the de facto choice for this sort of thing). I have linked to it here just in case you wanted to have a squint in your own time … get it? My lord, I even astound myself sometimes. (If the downloaded code has moment in the changeTime function, you will need to change it to the code in this post; I can’t remember which/when I compressed the folder, it was a while ago now.)

    You can see it working over on incredulous.org/br/ should you manage to break something from the download 😉

    Enjoy your exercising and, as always, if you enhance or make more of this than I ever would, please let me know and stick a link to your version in the comments or something. It is always nice to know things are being used out in the wild, so to speak.

    Download “Beep Regime files” br.zip – Downloaded 760 times – 650.72 KB
  • Testing .. 1 . 2 .. erm .. 3 .. Gutenberg Blocks

    Testing .. 1 . 2 .. erm .. 3 .. Gutenberg Blocks

    To Gutenberg blocks and beyond! Well, that was an interesting 48 hours! I was sitting here writing something for someone else (words, lots and lots of words) and while I was looking at it, I thought it would be nice to have more than one column of text to read, like a newspaper or magazine; flowing columns. Of course, a simple enough ask if you’re creating your own HTML/CSS, the ‘column-count’ and ‘column-gap’ styling options work a treat. However, when you’re writing in WordPress paragraphs, there is no such thing.

    It seems since the update to 6.3 and the update to the 2023 theme, this is now broken. Please be aware of that, I haven’t had a chance to look at why. If you wanted to have a look and let me know that would save me the effort! Thanks.

    Dave
    (more…)
  • Creating a LXD Backup Server

    Creating a LXD Backup Server

    Finding a simple enough tutorial for doing this proved a little disappointing; obviously everyone utilising containers is an expert in their field and knows everything there is to know about the various functions of LXC/LXD — export, snapshot, copy, etc. So much so, nothing cohesive could be found.

    Because of that I thought I would set myself the problem of creating a container backup server, somewhere other than where the containers are in active use and pausing them would be disadvantageous. The server will be added to the LXC container server as a remote server and images will be copied using LXC COPY (lxc-copy). So worst case scenario, the containers are in two places and can be copied back and forth with relative ease and speed.

    All images created by Dave Wise © 2022. All rights reserved.

    I have decided to discuss everything I run into along the way including the stuff I do to solve any particular problems along the way.

    Creating a Container Backup server

    For the backup server I will be starting from a blank slate installation of Ubuntu 22.04 (hopefully if the release date isn’t pushed back some more), no real reason beyond the fact that I fancied having a look at the new version. Yes, yes, very sad I know … you would have thought after 30+ years of continually installing/reinstalling operating systems for fun I would be well over it by now.

    The Problem and Solution:

    A single server operating LXC containers for different isolated functions with zero crossover between them; they are single instances, and if they break there is no recovery for them. For the sake of this exercise we will call this server STUPID.

    So we will add another container server and copy all of the containers over to it. We do not need to worry about connecting to any of the services running on those containers, as this is purely a one-way backup process rather than a fail-over. It helps to have a problem to solve. "Why?" I hear you ask. Well, that way, whoever is reading this nonsense can relate to it and, thus, to the solution.

    For the BACKUP server I have started with 20.04.1 LTS as a clean install with the LXD 4 snap enabled. The only system software installed is the absolute minimum, plus sshd to be able to work on the server itself.

    Of course, if you google (sorry! search using whichever search engine you feel comfortable with: Bing, Yahoo, whatever) for backing up containers, you will get all of the command-line specifics but none of the context around them, such as securing the backups, making sure there is very limited access to them, securing the underlying OS and so on. Again, this is why this exists.

    So here we are, staring at our nice shiny new Ubuntu server, we told it to use the LXD 4 Snap during install, so we have LXC and LXD installed. Now we need to initialize it to our desired settings.

    dave@shiny:~# lxd init

    Where I select the default, I won't bother explaining; if you want to know more about the options, feel free to once again use your favourite search engine!

    Would you like to use LXD clustering? (yes/no) [default=no]:
    Do you want to configure a new storage pool? (yes/no) [default=yes]:
    Name of the new storage pool [default=default]:
    Name of the storage backend to use (zfs, ceph, btrfs, dir, lvm) [default=zfs]: lvm

    I used lvm here, it doesn’t really matter for the limited use I am going to put it through, the default is zfs and that would be perfectly fine.

    Create a new LVM pool? (yes/no) [default=yes]:
    Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
    Size in GB of the new loop device (1GB minimum) [default=30GB]:
    Would you like to connect to a MAAS server? (yes/no) [default=no]:
    Would you like to create a new local network bridge? (yes/no) [default=yes]:
    What should the new bridge be called? [default=lxdbr0]:
    What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
    What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
    Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
    Address to bind LXD to (not including port) [default=all]: 80.168.84.221
    Port to bind LXD to [default=8443]: XXXX
    Trust password for new clients: xxxxxxxxxxxxxxx
    Again: xxxxxxxxxxxxxxx

    Right, this is the important bit: we need the new backup server to be available to STUPID so we can finally back up those containers somewhere else. At least that reduces the risk to the containers a little. We could extend this by creating exports from the backup server (that way the load isn't service affecting) and then uploading those images securely to the cloud somewhere for extra safe keeping ... or even downloading them, putting them on a disk and putting the disk into a fireproof/EMP-protected safe. But that's getting a little ahead of ourselves; let's get back to making sure the LXD server is available to back up to.

    We do want the server to be available over the network, so change the default from no to YES. Then we supply the IP address for the server we wish to bind to and the port we wish to bind to. I changed the port from the default, just for the sake of it. I know the port will be firewall protected, but still, better to be safe than sorry; a little protection through obscurity never hurts.

    Make sure you use a strong password here; if in doubt, there are a multitude of strong password generators available online for you to use. If you're feeling lazy, here's a quick JavaScript one you can use to generate strong 16-character passwords. There are no symbols in the generated passwords; they irritate me because you can't just double-click them to highlight them for C&P'ing.

    Password Generator by Dave Wise


    © Dave Wise 2022 . use and abuse . feel free to credit


    JavaScript
    <script>
    const keys = {
      upperCase: "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
      lowerCase: "abcdefghijklmnopqrstuvwxyz",
      number: "0123456789"
    };

    const getKey = [
      function upperCase() {
        return keys.upperCase[Math.floor(Math.random() * keys.upperCase.length)];
      },
      function lowerCase() {
        return keys.lowerCase[Math.floor(Math.random() * keys.lowerCase.length)];
      },
      function number() {
        return keys.number[Math.floor(Math.random() * keys.number.length)];
      }
    ];

    function createPassword() {
      const passwordBox = document.getElementById("passwordBox");
      const length = document.getElementById("length");
      let password = "";
      while (length.value > password.length) {
        let keyToAdd = getKey[Math.floor(Math.random() * getKey.length)];
        password += keyToAdd();
      }
      passwordBox.innerHTML = password;
    }

    function copyPassword() {
      const textarea = document.createElement('textarea');
      const password = document.getElementById("passwordBox").innerText;
      if (!password) { return; }
      textarea.value = password;
      document.body.appendChild(textarea);
      textarea.select();
      document.execCommand('copy');
      textarea.remove();
      alert('Password copied to clipboard');
    }
    </script>

    <div class="controls">
      <div class="password">
        <p id="passwordBox"></p>
        <span onclick="copyPassword()" class="dashicons dashicons-clipboard"></span>
      </div>
      <div>
        <label for="length">Length</label>
        <input type="number" id="length" min="6" max="32" value="16">
      </div>
      <button onclick="createPassword()">Get Random Password</button>
      <span class="copyright">© Dave Wise 2022 . use and abuse . feel free to credit</span>
    </div>
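    As an aside, if you would rather stay in the terminal than use the widget above, a rough shell equivalent of the same idea works too. This is just a hypothetical one-liner of mine, not part of the original setup, but it follows the same letters-and-digits-only policy:

    ```shell
    # Pull random bytes, keep only letters and digits, take the first 16
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
    ```

    Like the JavaScript version there are no symbols, so the result double-clicks cleanly for copy and paste.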
    Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
    Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

    And that's pretty much all there is to setting up LXD as a remote server; with all the cleverness around it, I suspect you thought there was more to it. Alas, it really is that simple. Although we do need to restrict access to it beyond the normal password protection: I, for example, only want the server that it's backing up to be able to see it, so all other traffic will be dropped.

    I don't think I need to set up a storage pool on the new server ... again, I hear you calling me names ... however, the backup server DOES NOT RUN THE CONTAINERS, it merely stores them as backups, and storage pools are only needed for running containers. I am prepared to be proven wrong, so let's see what happens by the time I get to the end of this exercise.

    For the firewall I will just use IPTables … the reality is I could get away with just hosts.allow and hosts.deny here as nothing fancy is needed … in fact …

    So that was even quicker; no need to explain any complicated rules to anyone, only that we have two files to edit: /etc/hosts.allow and /etc/hosts.deny. Obviously you don't want to screw this up, as getting around it will be pretty tricky. Test it before you log out of the server.

    dave@shiny:~/# vi /etc/hosts.deny
    # Deny absolutely everything
    ALL: ALL

    Be selective, we have still got to do admin/housekeeping on the SERVER, no matter how much you ignore it!

    dave@shiny:~/# vi /etc/hosts.allow
    sshd: a.b.c.d/255.255.255.xxx
    # Allow everything to/from the server we are backing up
    ALL: a.b.c.d

    Now the new server is all protected and ready to accept images from elsewhere, so let's return to the original server, STUPID, to make the additions and changes we need there.

    We ran into a bit of a bump here: STUPID was running a very, very old version of LXD, so some risks were taken doing upgrades (after taking copies of everything, of course, slowly and manually) to bring it up to a version that could be used with SHINY.

    I would love to say it was painless, and to be fair it was, but it was definitely arduous ... this was something I wanted to avoid, but alas, short of changing the model this whole exercise was based on (i.e. tar and compress snapshots and then rsync them onto another server, probably infinitely easier to be honest), we are where we are. So, without further ado, I created a complete duplicate of the server and disabled all upload services; I am not particularly concerned about logs as they are kept elsewhere. Now I have a replica server to work on to make sure the changes work.

    For each LXC container found in /var/lib/lxc, we had to create a tarball of the rootfs, create a metadata file, and then lxc image import them into the new LXD system (after installing it via snapd, obviously).

    dave@stupid:/var/lib/lxc/container/rootfs/# tar zcf /tmp/og-containers/container.tar.gz *
    dave@stupid:~/# date +%s
    1660235845
    dave@stupid:~/# vi /tmp/og-containers/metadata.yaml
    architecture: "amd64"
    creation_date: 1660235845
    properties:
      architecture: "amd64"
      description: "CONTAINER (20220101)"
      os: "ubuntu"
      release: "18.04"
    dave@stupid:~/# tar zcf ~/metadata.tar.gz metadata.yaml

    I only needed one metadata file, as all of the containers were based on the same OS, so that saved some typing at least. There were only 11 containers, so I didn't bother writing a script to do it either. Hopefully this won't be an experience I need to repeat any time soon.

    Now to import the images into the new install of LXD/LXC

    dave@stupid:~/# lxc image import /tmp/og-containers/container.tar.gz ~/metadata.tar.gz --alias og-container-name
    dave@stupid:~/# lxc image list 
    +--------------+--------------+--------+-------------+--------------+-----------+----------+------------------------------+
    | ALIAS        | FINGERPRINT  | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE      | SIZE     | UPLOAD DATE                  |
    +--------------+--------------+--------+-------------+--------------+-----------+----------+------------------------------+
    | og-container | f25fc29a412b | no     |             | x86_64       | CONTAINER | 402.87MB | Aug 01, 2022 at 0:01am (UTC) |
    +--------------+--------------+--------+-------------+--------------+-----------+----------+------------------------------+

    And “Tada“, now to rinse and repeat for all of the other containers and get them into the new image list. Remember, I only need one metadata file as all containers are the same OS, so this can now be replicated for all of the containers.
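    With only a handful of containers the manual rinse-and-repeat is fine, but if you have more of them, the grunt work can be wrapped in a small loop. To be clear, this is a sketch rather than what I actually ran: the paths and the og- alias scheme are assumptions based on the steps above, and setting RUN=echo gives you a dry run that just prints the commands.

    ```shell
    #!/bin/bash
    # Sketch: tar up every old-style LXC rootfs and import it into the new
    # snap LXD, reusing the single shared metadata tarball from earlier.
    # Call with RUN=echo first to print the commands instead of running them.
    og_import_all() {
      local src=$1 out=$2 meta=$3 dir name
      mkdir -p "$out"
      for dir in "$src"/*/; do
        [ -d "$dir/rootfs" ] || continue
        name=$(basename "$dir")
        # Tarball of the rootfs contents, as done by hand above
        ${RUN:-} tar zcf "$out/$name.tar.gz" -C "$dir/rootfs" .
        # Import with the container's own name as the alias
        ${RUN:-} lxc image import "$out/$name.tar.gz" "$meta" --alias "og-$name"
      done
    }

    # Dry run first:
    # RUN=echo og_import_all /var/lib/lxc /tmp/og-containers ~/metadata.tar.gz
    ```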

    On a side note, a couple of days have passed between then and now, because there is an issue with the old version of systemd in Debian 8 (Jessie), which a number of these containers were running, so as well as being moved they have also had to be completely upgraded. I took the opportunity to make them all Bullseye containers. A real ball-ache to be fair; all I could hear whilst doing the work was "If it's not broken, don't fix it" ... oh, how nice it would have been not to have to. But we now have a fully autonomous LXD 4 container server running 24 containers that are all nice and shiny and new. Now, when time permits, I will get back to creating the actual thing I started this for: a backup system on a different server to keep the images and snapshots safe from terminal illness.

    Right, now back to it … I have added a password to the LXD backup server

    lxc config set core.trust_password xxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Now, with that done, on the nice shiny new LXD server with all of the containers that need backups, I have to add the remote server.

    dave@stupid:~# lxc remote add backup xxx.xxx.xxx.xxx:xxxx
    Certificate fingerprint: e79c64e9dac5ab8a41d1d14af3a154d7bfb44d8a14283907a64a49b2158d9a14
    ok (y/n)? y
    Admin password for backup:
    Client certificate now trusted by server: backup

    Of course, you will need the trust_password you just assigned to your backup server.

    dave@stupid:~# lxc remote list
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | NAME            | URL                                      | PROTOCOL      | AUTH TYPE   | PUBLIC | STATIC |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | images          | https://stupid.domain.com                | simplestreams | none        | YES    | NO     |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | backup          | https://shiny.domain.com:7890            | lxd           | tls         | NO     | NO     |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | local (current) | unix://                                  | lxd           | file access | NO     | YES    |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+
    | ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
    +-----------------+------------------------------------------+---------------+-------------+--------+--------+

    I wish the next bit were just simple, but alas, copying live (running) containers can cause complications such as freezing or even data corruption. It's not entirely clear why this happens and it absolutely isn't consistent, but it does happen, and as I won't be doing any of this stuff manually, it's time to do things slightly differently.

    So I need a script that will give me all of the running instances. It's quite straightforward really; the hardest part is remembering the regex to do the trimming ... it doesn't matter how many times I do it, I still have to look up the order of things. To be fair, I create shell scripts maybe once every 3 or 4 months; it's hardly as if I am living the shell-scripting dream.

    Given it's pretty straightforward, there is no need to employ anything complicated to get the job done. LXC allows for JSON output as well as the default/normal stdout ASCII. To take advantage of JSON without employing a more comprehensive language, using just a bash script, we will need the additional package "jq" to be installed. Ubuntu and Debian can install it via APT; otherwise it can be found here (https://stedolan.github.io/jq/).

    # Old Skool
    list=`lxc list | grep -i container | awk 'BEGIN {FS ="|"}; {gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2}'`
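    To see what that awk incantation is actually doing, you can feed it a couple of mocked-up rows of lxc list table output (fake container names, obviously). The gsub strips leading and trailing whitespace from the second |-delimited field, which is the NAME column:

    ```shell
    # Two fake rows of `lxc list` output, piped through the same trim
    printf '| web-container  | RUNNING |\n| db-container   | RUNNING |\n' \
      | grep -i container \
      | awk 'BEGIN {FS ="|"}; {gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2}'
    # -> web-container
    #    db-container
    ```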

    jq isn't a standard bash tool, so you WILL have to install it if it isn't already installed.

    # New Skool
    list=`lxc list --format=json | jq -r '.[].name'`

    Now we need to check whether the backup exists. We only want to keep two backups per container; I am not interested in anything overly complicated, and we can always run monthly exports off of the second LXD server to dump tarballs to a network storage device if we are that concerned. But right now, I don't care ... I just need to make sure that if the backup exists, it's renamed to backup.bak, and if backup.bak already exists, it is deleted. Then the backup can be done.

    Shell Script Pseudo Code
    Plaintext
    1. Get list of active containers
    2. With each container
      2.1 create snapshot to backup
      2.2 backup snapshot
        2.2.1 if destination backup.bak exists delete backup.bak
        2.2.2 if destination backup exists move to backup.bak
        2.2.3 copy snapshot to destination backup
      2.3 delete snapshot

    I will put the actual shell script here for everyone's abuse ... feel free to use/abuse as required. It would be nice to know if you used it or any part thereof, or even if it just helped you structure your thinking to solve your problems your way.

    Shell Script

    ShellScript
    #!/bin/bash
    # +--------------------------------------------
    # | Create remote server backups for LXD Server
    # +--------------
    #  -- Author: Dave Wise 2022

    REMOTE="iprism"

    if [ "$1" == "-h" ] || [ "$1" == "--help" ]
    then
      printf "bakbak.sh © Dave Wise 2022 . all rights reserved\n"
      printf "________________________________________________\n"
      printf "Usage: bakbak.sh\n\n"
      printf "Description: A tool for backing up live containers to a remote LXD Server\n\n"
      exit 0
    fi

    # Fall back to the old-skool grep/awk trim if jq isn't installed
    if ! jq_loc="$(type -p "jq")" || [[ -z "$jq_loc" ]]
    then
      activecontainers=`lxc list | grep -i container | awk 'BEGIN {FS ="|"}; {gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2}'`
    else
      activecontainers=`lxc list --format=json | jq -r '.[].name'`
    fi

    if ((${#activecontainers[@]}))
    then
      todaydate=`date`
      message="BAKBAK - Container Backup Service\n---------------------------------\n\nContainers processed ($todaydate):\n"
      for acontainer in $activecontainers
      do
        message="${message}$acontainer\n"
        # Get existing backup names on the remote server to check against
        if ! jq_loc="$(type -p "jq")" || [[ -z "$jq_loc" ]]
        then
          matchingcontainers=`lxc list $REMOTE:$acontainer | grep -i container | awk 'BEGIN {FS ="|"}; {gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2}'`
        else
          matchingcontainers=`lxc list --format=json $REMOTE:$acontainer | jq -r '.[].name'`
        fi
        backup=""
        bakbak=""
        backbackupname="$acontainer-bak"
        for container in $matchingcontainers
        do
          if [ "$container" == "$acontainer" ]
          then
            backup="found"
          fi
          if [ "$container" == "$backbackupname" ]
          then
            bakbak="found"
          fi
        done
        if [ "$bakbak" == "found" ]
        then
          # Delete the oldest backup (container-bak)
          lxc delete $REMOTE:$backbackupname
        fi
        if [ "$backup" == "found" ]
        then
          # Move the existing backup container to the second backup slot
          lxc move $REMOTE:$acontainer $REMOTE:$backbackupname
        fi
        # Now it's time to do the actual backup
        # Create snapshot
        lxc snapshot $acontainer snap0
        # Back up snapshot to remote server
        lxc copy $acontainer/snap0 $REMOTE:$acontainer --quiet
        # Delete temporary snapshot
        lxc delete $acontainer/snap0
        # In reality the three commands above could be one line, but I broke them down
        # so you can see what each bit is for ... probably pointless tbh
        # lxc snapshot $acontainer snap0; lxc copy $acontainer/snap0 $REMOTE:$acontainer --quiet; lxc delete $acontainer/snap0
      done
      printf "${message}"
    fi

    Final thing to do is make it run once a week. I don't know about you, but I still think of Sundays as the beginning of the week, so I will run this script once now and then set it to run at 1am every Sunday.

    0 1 * * 0 /usr/local/bin/bakbak.sh

    Yes, I copied the script to /usr/local/bin ... I didn't think I would need to say that, but someone just pointed out that I didn't *sighs*
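    If you would rather not open the crontab editor by hand, something along these lines installs the entry without adding it twice. This is an assumed convenience rather than how I actually did it, and it presumes the script lives at /usr/local/bin/bakbak.sh as above:

    ```shell
    # Rebuild the crontab: existing entries minus any old bakbak.sh line,
    # plus the weekly 1am Sunday entry
    ENTRY='0 1 * * 0 /usr/local/bin/bakbak.sh'
    newtab=$( { crontab -l 2>/dev/null | grep -v 'bakbak\.sh' || true; printf '%s\n' "$ENTRY"; } )
    printf '%s\n' "$newtab"                 # eyeball it first ...
    # printf '%s\n' "$newtab" | crontab -   # ... then install it for real
    ```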

    Anyway, that's enough from me. The script is working and I have backups on a different server that update weekly, which is more than enough for my requirements. But the principles are sound if you want to change the frequency, for example running the script daily (bear in mind that large containers will take a while to process and will likely cause significant load whilst the snapshots are being processed). Maybe extend it further and add a third server to receive weekly backups from the daily backups. You do you; I hope this helps someone at least a little, even if it is just to help you decide what not to do.

    Happy Container’ing #peaceout