Author: Kenneth

ExifTool, moving from cmd line to Powershell


Previously I highlighted an issue with getting ExifTool to add GPS data to a photo. Now that that issue is resolved (see here), we move on to getting it to work within Powershell.

Getting it to work within Powershell is important for me, as I’m already using Powershell to copy photos from my SD card onto my machine much, much faster than the Sony software can (see here). So why not reuse that script: as well as copying each photo across, I can add the GPS data at the same time.

The Command

exiftool -GPSLatitude*=56.9359839838 -GPSLongitude*=-4.4651045874 DSC00320.JPG  

Above we have a working command. Powershell can call executables in various ways as can be seen here.

Option 1 – The & operator

This failed, as the parameters for exiftool need to be more than a single string, which is what & passes. If you look at the linked MS article, I tried all the & options it suggests and nothing was close to working.

Option 2 – Direct call

A direct call requires you to add .\ if the item isn’t in the PATH, or some other means of providing the correct folder location. I had exiftool in the same folder as the script, so .\ did me just fine.

Cut to the chase…

There is an issue with Powershell: the leading - in the arguments causes it problems. See the issue listed here.

This meant that on my first pass I had to split it into 2 calls to correctly update the GPS.

$combLat = '-GPSLatitude*=' + $point.lat
$combLon = '-GPSLongitude*=' + $point.lon

.\exiftool.exe $combLat $f.FullName '-overwrite_original_in_place'
.\exiftool.exe $combLon $f.FullName '-overwrite_original_in_place'

Now with the use of backticks I can do the following

.\exiftool.exe `-GPSLatitude*=$($point.lat) `-GPSLongitude*=$($point.lon) $f.FullName -overwrite_original_in_place


That tiny piece of code above (the change to backticks) took so long. A combination of human error and not knowing the order in which to use a ` or a ' or a ", or even a range of them, required a fair bit of trial and error.

It’s on Github

If you wish to view the full code, grab it, run it, etc. then head over to this link (the GPS_Sony_Photo_data_merger repo).
I’ve seriously cut back this post, as I learnt so many things, but the easiest way for you to pick up from my learning is to just view the code. A picture may well speak a 1000 words, but 100 lines of code is so much better than 10,000 words in a blog 😉

ExifTool, Powershell, Cmd line – adding GPS


So you’ve got some photos, those photos do not have any location data. You have some data and you wish to automate adding that data to your photos.
The concept is simple, yet there are numerous hurdles to overcome.

I’ll highlight some of the issues I hit along my journey so you do not have to. See my previous post for another hurdle I had to overcome.

It works on the command line

The first step is to get ExifTool from here, if you wish to follow along. It does the actual adding; you just need to know what to call it with, and how.

Next up, you’ll want to test it working on a single image, and most of the examples I was reading involved the command line. So off I went.

This is the XML node containing the data

<trkpt lat="56.9359839838" lon="-4.4651045874">

So you pull out the latitude and longitude and come up with the following:

exiftool -exif:gpslongitude=-4.4651045874 -exif:gpslatitude=56.9359839838 -GPSAltitude=30.42 DSC00320.JPG

It’s important to note that I’ve copied exiftool.exe into the same folder as the image, and the above command is run from that same folder. That means you don’t have to worry about paths, which keeps the initial setup simple.

So I run the above, it works – sort of! No errors, I take the image, drop it into either Flickr or a free app that shows the location the photo has been tagged with. Hmmmm, it’s not right. That location is in Scotland, but the map is showing it as somewhere in the middle of the North Sea!

The reason it failed is that GPSLatitude/GPSLongitude needs a reference to its place in the world, as in which quadrant it’s in: N/S/E/W. But I do not have that data. In my case the value is negative, which means it’s West and not East. But this command takes the number as if it were positive; it completely ignores the fact that I gave it a negative value!

Next option

So exif failed. Googling, I can see that if I use the xmp tags, they can handle negative values, as in they do not require the N/S/E/W reference values.

exiftool -xmp:gpslongitude=-4.4651045874 -xmp:gpslatitude=56.9359839838 -GPSAltitude=30.42 DSC00320.JPG 

Again no errors. I load the file into GeoSetter and it shows me exactly where I expect the photos location to be. Wonderful. It worked then. NOPE!!!!

I load the exact same file into Flickr and yet again, the same as with the first command, it places the photo’s location in the North Sea! 😕 I tried this in various ways and got the same result each time. It works in GeoSetter, but not in Flickr. What on earth is going on???

Debugging/Comparing tags

Firstly I went to look at the GPS location while inside of GeoSetter and if I tried to change it there, it gave me the following warning.

After setting this in GeoSetter, the file with no changes to GPS location then worked in Flickr!

So now that it worked in both GeoSetter and Flickr, I presumed the initial failure in Flickr was due to some timestamp/date issue. Perhaps Flickr, being a bit more professional than a free app, applied the rules in a stricter fashion. WRONG, the reason it failed was not down to the timestamp! On adding the timestamp, GeoSetter also rewrote the geotags, and that is what made it work in Flickr.

Had I known this command at the start, which outputs exif tags in groups, it would have helped hugely in debugging the situation.

exiftool -a -G1 -s E:\CameraAddGPS\TestFolder\before\photos\DSC00939.JPG

Correct command to insert GPS with only lat/lon

exiftool -GPSLatitude*=56.9359839838 -GPSLongitude*=-4.4651045874 -GPSAltitude*=30.42 DSC00320.JPG 

You can see that the parameters include a wildcard; that allows you to give only the lat/lon, and it will work out the N/S/E/W reference based on the -ve or +ve values given. 👌

Now that works in both Flickr and Geosetter. Brilliant 😍
Next up is to figure out how to do so from within Powershell. (see my next post, coming soon)

Additional info

Just an FYI, if you run the above command to output the grouped data on the files, the below is what you see. The text on the left is from the file that works in all places (Flickr and GeoSetter), and the text on the right is from the file that only worked in GeoSetter.

You can see that both have the correct data, but the one on the left (which works in all places) has the data inside the main GPS tag group, whereas the one on the right has some different Composite and XMP-exif data, which isn’t standard enough for Flickr to pick up.

Powershell – What on earth are you returning?


I’ve finished a ‘minor’ project where I extended my previous Powershell script (seen here).

What’s new and what did I solve?

I’ve got a Sony RX10 IV and one of its drawbacks is that it doesn’t have a GPS unit inside it. This is a major bummer as far as I’m concerned. So what to do? I searched, read various guides, and yet nothing simple jumped out to solve my issue. Ideally I wanted a device to attach to its hot-shoe that would write GPS data to the photos as I took them. There is nothing, not a single thing, that is suitable for the Sony camera.

Before anyone comments, why not use Sony’s phone software to sync up – it’s a huge pile of πŸ’©πŸ’©πŸ’© Not getting into why, it just is! PlayMemories and Imaging Edge Mobile are both virtual paperweights. Ready to be binned on my phone now.

So I happen to have an eTrex 30 for when I go hillwalking, and as soon as you turn it on, it starts recording its location and the time. This is stored in a *.gpx file. Sorted! All I need to do is merge the gpx file with my photos. Sounds simple. Actually nope, it took me far too long…

Functions that return way more than you expect!

So you can create functions in Powershell – good.

You create the function flow, and in order to make sure it’s working as expected you put in some echo commands. Great so far.

NO, NO, NO – do not use echo commands inside a function that is going to return something. It completely MESSES it up!

Some code ->

Look at the below screenshot while debugging some code πŸ‘‡

If you look at the above image 👆 you can see that the function getSelectedPoint returns the selected point. But it also has an echo.
Inside the debug tooltip you can see that the returned object is more than the point; it’s also got the echo output!
WTF Powershell! Never seen a language return more than what’s specified to return.

Now check out the same but without the echo

Look at the same debug tooltip when the echo isn’t there. That returned point is just a point.

WARNING – I don’t know Powershell

Must add this warning, as I’m sure those with experience will call out the reasons for the above, and no doubt issues with my code. But as a novice in Powershell I’m highlighting the issues I stumbled on to get the end result. This took too long to figure out. HTH.

If you’ve got this far and want to know more, then head along to this – About Return. This is an MS guide about Powershell and what it returns. Essentially, if you want a standard programming-style return, then you can use a class.

Spread and Exclude


The other day I wanted to log out some values of a large object. But one particular property was massive and not needed. So how could I pass all of the original object minus that one property that was going to get in the way?
As in, how to exclude a single property from an object.


It’s a very simple and, I think, elegant solution, so here it is.

Using destructuring you can list the specific properties to pull out first (these will then be excluded), then use the … rest syntax to collect the remaining properties.

That’s it! You’re telling your code which properties to pull out first; those will then not be included when you use the rest syntax. Brilliant!

const bigObject = {a: 1, b: 2, c: 3, d: 4, e: 5, f: 6};
const { a, ...copiedAndExcluded } = bigObject;
// copiedAndExcluded is now the same as bigObject, except it doesn't have the property 'a'.

Also, in my case, bigObject was a property of something else, which at times may have been undefined. This will cause an error, as you can’t destructure undefined.

const parentObject = { a: 1, bigObject: undefined, c: 3, d: 4};
const { a, ...copiedAndExcluded } = parentObject.bigObject || {};
// 'a' is undefined and 'copiedAndExcluded' is an empty object

The `|| {}` makes sure that the code doesn’t try to destructure undefined (as that would throw an error). You can, however, destructure an empty object. The values that come out will be undefined, which in my case was just fine.
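As an aside, if you need this trick in more than one place, it can be wrapped in a small reusable helper. This is only a sketch; the `omit` name and shape are my own, not from any library:

```javascript
// Return a shallow copy of obj without the listed keys.
// A null/undefined obj falls back to an empty object, mirroring the || {} trick above.
const omit = (obj, keys) =>
  Object.fromEntries(
    Object.entries(obj || {}).filter(([key]) => !keys.includes(key))
  );

const bigObject = { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6 };
console.log(omit(bigObject, ["a"])); // { b: 2, c: 3, d: 4, e: 5, f: 6 }
console.log(omit(undefined, ["a"])); // {}
```

The same destructuring approach from the post is simpler when the key names are known up front; a helper like this only earns its keep when the keys vary at runtime.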

Copying files using Powershell (SD card to PC)


I’ve got myself a new camera – Yeah 😎
It doesn’t come with any software to copy images etc over. Boo…
It does say you can manually copy from SD to your machine, but frankly that sucks. I expect software to create things like a nice folder structure automatically. I’m not going to copy, create folder(s), paste multiple times each time I take some photos.

It’s a Sony RX10 IV, so it’s not a cheap camera, and after a hunt online the offering Sony point you at is the Capture One software. I grabbed that and gave it a go. I had just over 1000 images on the SD card that I’d taken (the card has a transfer speed of 250 MB/s, so it’s no slouch!).

Fire up Capture One (which has a very annoying register interface) and started to transfer the images. All seemed good.


After a short time the Capture One software was reporting it would take another 3 hours and 20 minutes to transfer the photos!
Over 3 hours!
What on earth was it doing?
So I cancelled and closed it, and reverted to my Nikon software (used for an older camera). After all, all I wished to do was copy the photos from one location to another and, at the same time, get the tool to create a reasonable folder structure.

Guess how long that took? I didn’t time it, but it was less than a minute, I’d approximate around 30 seconds.

So the issue wasn’t the card or my computer – it was Capture One.


Looked online for other Sony recommendations, but all I could find was others complaining about the Capture One software – from years ago. Could I use the Nikon software? No I couldn’t. It worked fine for the jpgs, but the Sony RAW files caused an issue: they have a different file extension from what Nikon uses, so their tool doesn’t see those files.

I thought I could write a wee script to change the file extension of the Sony RAW files to the same type as Nikon’s. That would be simple.
Then I thought: why don’t I just write a bat file to copy everything across and create the desired folder structure?

Powershell – not bat

The first task is for the script to look at each file and see when it was created, then use that date to create a folder structure. That isn’t actually a simple task with bat files, and the recommendation is to use Powershell. 🙂

First powershell script

So I’ve never created a powershell script before, but as you’ll see in the following code it’s dead easy and very simple to work with.

# SD Card location
$inputDir = "G:\"
# Where to copy file to
$outputDir = "D:\myPhotos\"
# get all files inside all folders and sub folders
$files = Get-ChildItem $inputDir -file -Recurse

# Create reusable func for changing dir based on type
function copyFileOfType($file, $type) {
    # find when it was created
    $dateCreated = $file.CreationTime
    # Build up a path to where the file should be copied to (e.g. 1_2_Jan) use numbers for ordering and inc month name to make reading easier.
    $folderName = $outputDir + $dateCreated.Year + "\" + $dateCreated.Month + "_" + $dateCreated.Day + "_" + (Get-Culture).DateTimeFormat.GetAbbreviatedMonthName($dateCreated.Month) + "\" + $type + "\"
    # Check if the folder exists, if it doesn't create it
    if (-not (Test-Path $folderName)) {
        New-Item $folderName -ItemType Directory
    }
    # build up the full path inc filename
    $filePath = $folderName + $file.Name
    # If it's not already copied, copy it
    if (-not (Test-Path $filePath)) {
        Copy-Item $file.FullName -Destination $filePath
    }
}

foreach ($f in $files) {
    # get the file's name
    $fileName = $f.Name
    if ( [IO.Path]::GetExtension($fileName) -eq '.jpg' ) {
        copyFileOfType -file $f -type "photos"
    }
    elseif ( [IO.Path]::GetExtension($fileName) -eq '.arw') {
        copyFileOfType -file $f -type "raw"
    }
    elseif ( [IO.Path]::GetExtension($fileName) -eq '.mp4') {
        copyFileOfType -file $f -type "movies"
    }
    else {
        # Do nothing - skip any other file type
    }
}

How to run?

Three steps are needed.

  • Copy the above code and paste it into a ps1 file. So give it a name such as transferFiles.ps1 (really doesn’t matter)
  • On the first instance, open Powershell as Admin
    • run this Set-ExecutionPolicy RemoteSigned
    • this lets you run your script locally; if you don’t do this and try to run any Powershell script that you create, it will give an error.
  • Run the script. Either right click and ‘run with Powershell’ or call it from the Powershell cmd.

Explanation – change it?

I’ve included a load of comments, so hopefully it should be clear. Essentially, if you want to use it yourself, you will wish to change the following:

  • $inputDir To wherever your original images are.
  • $outputDir Where your existing photo root folder is.
  • $folderName is what I’ve used to define the created folder structure. Feel free to change it to your own preference.
  • The RAW file type extension of Sony is arw. Your type may be different. I split the different file types into different folders. That’s just my preference. Do what you wish.


The above shows what the output is: a simple folder with the year, then the combined month and day numbers plus the named month. The reason I do this is for ordering by name. If the day came first, you’d get the 1st of each month grouped together, so the whole list wouldn’t be in chronological order when sorted alphabetically.


Possible future improvements:

  • I could make this happen automatically when the card is inserted into my machine.
  • I could delete the files once they’re copied (with verification that they’d been copied!).
  • I could add duplicate copying to my backup NAS drive.

Sure, with this being my first go at a Powershell script there are ‘wrong’ items, but hopefully it will give someone an idea of what to do. Hopefully it works at speed on 1000s of files as well! I’ve only tested it on a few files right now. Gives me an excuse to go out and fill the camera’s SD card with a load of photos 🤣

Cycles of Life – A Cautionary Tale!


β€œThose who do not learn from history are doomed to repeat it.” (George Santayana)

Having been around the software world for more than a few years you see the same cycles time and again. That rather old phrase of “those who do not learn from history are doomed to repeat it.” comes to mind.
Teams grow and shrink and, all going well, grow again. Prepare for it, so you can manage it.

More appropriately this should be called –
Cycles of Life of a Software Team – A Cautionary Tale!

Software and the teams around them are, on the whole, very versatile and nothing is really doomed. It will probably just slow you down (hugely) or cost someone a shed load of cash to sort out 😛

We’re entering a new year, all going well your team is looking forward to growing, so keep the following in mind!
When you know what to look for then hopefully you can do something about it.

Team growth

  • How big is your overall team?
  • How big are the individual teams within the overall team?
  • How quickly has the team grown?
  • How quickly do you want it to grow?

These questions are key to figure out what will happen. The following has happened on more than one occasion and in completely different teams.

How quickly have you grown?

You are an awesome start up, your idea is a massive hit with the investors, it’s time to grow, grow, grow!

Here’s the thing, the software industry has a HUGE staff churn rate. Even the best companies will struggle to keep staff for a long time. You must be prepared for this.

Your team has grown from 10 to 30, maybe even 50+ in a very short period. The buzz around you encourages others to join. Nothing better than a greenfield project to work on.

Here’s the rub: all those new staff will be very likely to leave in quick succession after 18-30 months, once the first one goes. Sure, some will stay on longer, but some of the core team may also leave. Through no fault of the team or the project, but just to go experience something else. This is completely normal in software teams. Especially once a senior, long-standing member goes – look and plan immediately for others to leave.
There is virtually nothing you can do to stop the flow, just accept it will happen and deal with it.

So what!

I’ve hired before, I can hire again. It’s all fine. It’s just a couple of people.
If it’s not already clear, the short life span of a developer in one place means that if you hire lots in quick succession, they will leave in the same fashion. Unless you are still actively growing, you’re in for a bump!

It’s the wave effect

At this point in time, you are no longer a start-up, you no longer have a greenfield project, and you no longer have the ability to just hire another dozen developers.
Also notice and look for a bit of a build-up, like a wave really. There is a feeling of general minor grumblings. Nothing major, but once someone goes, curiosity and the question of ‘is it greener over there’, without something to cancel out those thoughts, means the inertia against leaving is broken.

Managers, why?

I have to admit failing to understand this point, but senior management never wish to replace a leaving member of staff right away. They seem to like the thought of: well, it’s only 1 person out of 40, we’ll make do without them.

Sure enough, the team does; it’s only 1 person. But that wave is building up in the rest of the team. Then the 2nd and 3rd go. By which time those leaving encourage others to go, maybe even taking their mates with them.

Know your market!

It all depends on how quickly you can hire. Every market is different and you need to know where you are.

  • Maybe your company is in a software hub of a city and hiring isn’t an issue.
  • Maybe your company is in a software hub, but part of a structure that makes hiring a laborious process
  • Maybe your company is in a remote area as far as getting a pool of developers is concerned

Essentially, how quickly can you get people back in that door? Are you ready to take staff out of their day jobs, plough through CVs, then do a fair amount of hand-holding to get the new staff up to speed?

Even in the best situation, where your company has a low-bureaucracy hiring process, you develop in a good cultural hub/city, and you have the finances to pay a potentially larger staff bill (generally the best way to get a raise is to move on), you will still have to take team members off active duty to transition in new staff.


  • Plan for people leaving – it will happen.
    If you quickly backfill a space, it will reduce the desire of others to look around. Anything fresh is a good thing; even a new face helps.
  • When you know it will take a minimum of 3 months to hire someone, never leave it until 3 or more people have left. If you do, then even if you start hiring, those seeds of change will have sunk in already.
    Really, if 3 people have left and you’ve not started looking, then you’re guaranteed that others will be on the way out (and in quick succession too, especially if they were all hired within a similar time frame).
  • Do not ignore unhappy staff – they’ll just leave quicker.
    You may well, and quite possibly completely, disagree with your staff member’s reasons for being unhappy, but it’s often an open market.
    Do you want them to stay or not? If not, plan for them to go. If they stay, great; if not, then you are ready.
  • Ideally always look to the weaker area in the team. Have a range of CV’s coming in and keep the top ones for reference later. If you have potentials ready to roll, then anything you can do to reduce a gap is crucial.
  • Most importantly, it can take a number of months to gain enough domain knowledge. The longer you leave it to re-hire folk, the greater the risk of crucial knowledge being lost, which will likely lead to a major issue that could have been solved quickly had the domain knowledge been retained.


Stop focusing on just the technical. You need to be a team, personally as much as just work colleagues. This is SUPER hard, sometimes you want nothing more than to just do the work and go, but…

  • The grass isn’t always greener on the other side! So keep your staff happy; when they do go, they will spread the good knowledge and practices of your team.
    This can even aid in getting new staff in, but it can also mean that staff can and will come back! Having been in many different companies, the programming world can feel like small circles at times. Just because someone left doesn’t mean they won’t wish to come back.

Arrays – Reducing all duplicates

Years ago, when I blogged regularly about ActionScript/Flex, some of the most simple and yet popular posts were on arrays.
Today I find myself doing something similar in JavaScript – and there is no out-of-the-box solution that I’m aware of.

Take a large array (where duplicates probably exist) and condense it into a collection of unique items.

Let’s say you’re hitting some API and it returns 1000s of results, and in those results you want to find out how many unique types of XXXX there are. How would you do that?
One simple way is to use an object and store the items using the keys of that object.

const duplicatedArray = [
  { name: "Bob", hairColour: "Black" },
  { name: "Jane", hairColour: "Grey" },
  { name: "Mark", hairColour: "Red" },
  { name: "Marsalli", hairColour: "Black" },
  { name: "Rachel", hairColour: "Brown" },
  { name: "Craig", hairColour: "Red" },
];

const getUniqueCollection = (keyFilter) => {
  const reduced = {};
  duplicatedArray.forEach((person) => {
    reduced[person[keyFilter]] = person[keyFilter];
  });
  return reduced;
};

What’s going on is that it passes only once through the array with duplications, and each time it writes the value to the reduced object. If, as in this example, it comes across two Black hair types, then the 2nd one will overwrite the first, therefore removing any duplication.

End result: you now have an object (some may well call this a dictionary) that on inspection contains a unique list of hair colours. This, for example, could be used to populate a dropdown box or similar. Or you could use the below to iterate over the new object (e.g. const uniqueCollection = getUniqueCollection("hairColour")).

for (var key in uniqueCollection) {
    var value = uniqueCollection[key];
    // do something...
}
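As a side note, when the items you want unique are simple values (like the hair colours here), modern JS does have a built-in way to de-duplicate: a Set. A quick sketch against the same example data:

```javascript
const duplicatedArray = [
  { name: "Bob", hairColour: "Black" },
  { name: "Jane", hairColour: "Grey" },
  { name: "Mark", hairColour: "Red" },
  { name: "Marsalli", hairColour: "Black" },
  { name: "Rachel", hairColour: "Brown" },
  { name: "Craig", hairColour: "Red" },
];

// A Set only stores unique values, so duplicates are dropped automatically.
// Spreading it back into an array gives a plain list of the unique colours.
const uniqueHairColours = [...new Set(duplicatedArray.map((p) => p.hairColour))];
console.log(uniqueHairColours); // ["Black", "Grey", "Red", "Brown"]
```

The object-as-dictionary approach in the post still wins when you need a keyed lookup rather than just the list of values.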

Let’s say you’ve got a more complex object!

const complexDuplicatedArray = [
  { brand: "Ford", colour: "Black", ignoredProperty: "XXX" },
  { brand: "Tesla", colour: "Grey", ignoredProperty: "AAA" },
  { brand: "VW", colour: "Red", ignoredProperty: "222" },
  { brand: "Ford", colour: "Black", ignoredProperty: "111" },
  { brand: "Tesla", colour: "Brown", ignoredProperty: "ZZZ" },
  { brand: "VW", colour: "Red", ignoredProperty: "YYY" },
];

// pass in ["brand", "colour"] as the parameter keyFilter
const getComplexUniqueCollection = (keyFilter) => {
  const reduced = {};
  complexDuplicatedArray.forEach((car) => {
    const keyValue = JSON.stringify(car, keyFilter);
    reduced[keyValue] = keyValue;
  });
  return reduced;
};

So here we are filtering on 2 keys, the brand and the colour. We convert the object to a string and drop the properties that we do not care about.


The stringify part in the above is key. What it’s doing is taking an array of strings naming the properties to include in the stringify process.
So using
myObj = { brand: "Ford", colour: "Black", ignoredProperty: "XXX" }
JSON.stringify(myObj, ['brand', 'colour']); will output {"brand":"Ford","colour":"Black"}
This is ideal for using as the storage key, so if there is another Ford car that is Black, with a different value in the property you wish to ignore, it’s not counted as unique.

Overriding template literal strings


Have you been using template literals for a while? Most likely, but did you know that you can override how it constructs the overall string? I didn’t until recently.

If you haven’t seen it before, then I think you’ll agree this code is really cool, with so much potential use in other areas.

It’s called tagged templates. It means that rather than your literal being joined with the standard method, the pieces are passed as arguments to your own function.
See the below – as ever, code is the best way to explain.

const person1 = "Mike";
const person2 = "Bob";

const taggedTemplateStringLiteral = myMethod`you can say hello to ${person1} and ${person2} :)`;

function myMethod(baseMessages, ...params) {
  // The reduce() method executes a reducer function (that you provide) on each element of the array, resulting in a single output value.
  // array.reduce(callback( accumulator, currentValue[, index[, array]] )[, initialValue])
  return baseMessages.reduce((acc, cur, i) => {
    acc += `${params[i - 1]?.toUpperCase() ?? ""}${cur}`;
    return acc;
  }, "");

  /* Adding in the below as this is perhaps a more common approach,
     but as shown in a previous post you can improve undefined checks
     with nullish coalescing, as in the above snippet.

  return baseMessages.reduce((acc, cur, i) => {
    acc += `${params[i - 1] ? params[i - 1].toUpperCase() : ""}${cur}`;
    return acc;
  }, "");
  */
}

What’s going on?

Firstly you have the grave/backtick character `. This makes it a template literal. When a string is enclosed with those, each portion (split up by the ${ } code/variable segments) is included in an array.
Next, whatever is evaluated for each ${ } block is passed as a separate parameter. You could grab them individually or, as in my case, use the rest (…) operator to grab them all.

myMethod`you can say hello to ${person1} and ${person2} :)`

The means of invoking the function is what caught my eye, in that you do not call it like an actual function! The above ends up being the same as –

myMethod(["you can say hello to ", " and ", " :)"], "Mike", "Bob" );

Which is rather smart and a great little snippet to remember.
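To show one possible practical use of the same pattern (my own made-up example, not from the original post): a tag that wraps every interpolated value in brackets, so variable values stand out in log messages.

```javascript
// A tag that wraps every interpolated value in [brackets].
// strings holds the literal pieces, values holds the evaluated ${ } parts.
function bracket(strings, ...values) {
  return strings.reduce(
    (acc, cur, i) => acc + (i > 0 ? `[${values[i - 1]}]` : "") + cur,
    ""
  );
}

const user = "Mike";
const count = 3;
console.log(bracket`user ${user} has ${count} new messages`);
// "user [Mike] has [3] new messages"
```

The same shape works for anything where you want to treat the literal text and the interpolated values differently, such as escaping values for HTML or SQL.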

?? Nullish coalescing & Optional chaining ?.


JavaScript sucks! It really, really does at times. Take a look at the difference in size between the books ‘The Definitive Guide’ and ‘The Good Parts’ 😁 I joke, of course, but some portions of it are just annoying.

Its loose and simple way is its strength and its weakness, but with each new feature from the latest ES range come new means to clean up your code and, hopefully, stop shooting yourself in the foot!
The operators below come from ES2020, the 11th edition.

Operators with ? in them can be a bit confusing if you’ve never seen them, and they are also hard to search for. How do you search for ? if you don’t know the operator’s name? So this is here for my own bookmarking as well as a reference for others. First up we have the ??

?? is the Nullish Coalescing operator

What’s its purpose? To make fallback defaults behave the way you’d expect: only null or undefined should trigger the default.

But under normal circumstances with ||, JS isn’t like that. I’m not going to go into the falsy/truthiness of JS, but suffice to say that ?? allows values such as 0 or an empty string to be kept as valid values, rather than being treated as false.

const textSetToAnEmptyString = "";
const DEFAULT_TEXT = "This is the default";

const myText = textSetToAnEmptyString ?? DEFAULT_TEXT;
console.log(myText); // ""

const myTextOldWay = textSetToAnEmptyString || DEFAULT_TEXT;
console.log(myTextOldWay); // "This is the default" - probably not what you wanted

As can be seen if the text is set to an empty string which is a valid value, with the ?? operator it remains as an empty string, which is exactly what you’d wish.

But if you used the || operator, the empty string would be replaced, as “” is considered false in JS land.

?. is the Optional chaining operator

Looking into this new feature, it is REALLY great.
Checking if a value exists in some nested object in JS is yet another of those painful items. Not that it’s hard to do, but it makes for such bloated code.
So this is simplest to show via code:

// set up an empty object
const obj = {};
const foo = obj.child;  // foo will be undefined - No error
const fooError = obj.child.secondChild;  // Error - can't access secondChild from an undefined value
const fooTwo = obj.child?.secondChild; // fooTwo will be undefined - no error
const fooThree = obj.child?.secondChild.thirdChild; // fooThree will be undefined - no error
const fooFour = obj.child?.secondChild.thirdChild.fourthChild; // undefined - no error

What’s going on with fooThree and fooFour???

Well, the ?. operator short-circuits when it goes to assign a value to fooThree and fooFour.

  • Looks at obj, which is a valid object
  • Looks for child; JS being the way it is doesn’t complain, and returns undefined (as it does not exist)
  • The ?. check now means that it will go no further, as child was undefined
  • secondChild, thirdChild and fourthChild never get looked at/processed, due to the short-circuiting

Also works on methods πŸ‘

const myFunc = () => { return "WOOO" };

const foo = {
  methodX : myFunc
};

const message = foo.methodX(); // "WOOO"
const message2 = foo.methodY(); // Error - methodY does not exist
const message3 = foo.methodY?.(); // undefined and carries on

// The ?. syntax will also work in the same way with arrays
// such as myArray?.[12] πŸ‘

Multiple ?.

In the above child isn’t defined, but what would happen if it was defined?

const obj = {};
obj.child = {}; // Set child to an empty object
const foo = obj.child?.secondChild; // undefined - no error
const fooTwo = obj.child?.secondChild.thirdChild; // Error - can't access thirdChild from an undefined value
const fooThree = obj.child?.secondChild?.thirdChild; // undefined - no error

Here we needed to insert a second ?.; this checks that secondChild is set before moving on. So fooTwo throws an error, as thirdChild can’t be accessed from undefined, but fooThree is fine as it never gets to thirdChild thanks to the short-circuiting.


Both of the above are powerful tools, and even better is that they can be combined!
Let’s say you have a Map of objects where a property may or may not have been set:

let myMap = new Map();
myMap.set("foo", {name: "baz", description: "" });
myMap.set("bar", {name: "baz" });

let descriptionFoo = myMap.get("foo")?.description ?? "description not set"; // output is ""
let descriptionFooOld = myMap.get("foo")?.description || "description not set"; // Output is "description not set" - not what you'd wish for as it has been set
let descriptionBar = myMap.get("bar")?.description ?? "description not set"; // output is "description not set" as it wasn't set

Hopefully the above will give you some ideas of where to use these new operators. I’m sure as I go through some old code I’ll see better examples on where to use them.

Moving from IntelliJ to VS Code


Change is hard, and it’s always harder if you’ve been using ‘X’ for years.

I’ve been using IntelliJ for a number of years now, and it is great! It really is, and there is no reason other than cost not to use it. If cost is not a concern then use IntelliJ.

Keyboard shortcuts

You’ve spent years in one IDE, your fingers just know where to go, what to do. That instinctive physical memory would be very hard to get over. Any change will immediately make you less productive. You’re just not going to do it!

Thankfully, the benefit of the community around VS Code comes to hand.

Install this and your fingers need not learn anything new!

Unit tests

Next up is testing. I really like right-clicking on a test to run/debug it. The below is a partial solution (right-click to debug fails).
It requires two extensions to get it to work.

1 – Test Explorer UI

This gives you the side panel where you can see your tests – but you will see ZERO tests in here to start off with. You need to install one more extension.

2 – Jest Test Explorer

Install the below and you will now see your tests in a panel inside VS Code.

You can now see the list of tests inside the Test Explorer.

As yet, I’m unable to get ‘debug’ to work. I can debug the entire suite of tests via a config command, but not a single test via the context menu.
This is really annoying… I’ll come back to this if/when I figure it out.


Next up is coverage mappings. You’ve run your tests, added the coverage flag and you want to see where in the code you are missing coverage.


I found that after installing the above and running my coverage, nothing was being highlighted. After messing around with configuration for ages, restarting the IDE etc. and getting nowhere, I disabled (not uninstalled) the previous extensions (Jest Test Explorer and Test Explorer UI). After another restart of VS Code, hey presto, it worked!!!
I then re-enabled the test extensions and all of the extensions were working.

Not as pretty as IntelliJ, but it’s better than nothing. The colours can be customised, so this could be made less jarring on the eyes.

Code formatting – Prettier

Looking for auto-formatting to a standard that is widely accepted? Prettier is the way forward for many reasons. Install the above, make a couple of config changes, and your code will be beautiful.

Go to your settings, filter by ‘save’, then update to the settings shown here.
“Format on Save” – True
“Auto Save” – onFocusChange
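For those who prefer editing settings.json directly, the same two settings look roughly like this (a sketch; the Prettier default-formatter line is an optional extra I’ve added, not from the steps above):

```jsonc
{
  // Format the file every time it is saved
  "editor.formatOnSave": true,
  // Save automatically when the editor loses focus
  "files.autoSave": "onFocusChange",
  // Optional: make Prettier the default formatter
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```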


Lastly, this is a bundle of other extensions that enables things like autocomplete of filenames and autocomplete of imports. This type of hinting and flow should be automatic, and it is there from the outset in IntelliJ, so getting the same here is a must.

Close enough…

So now that I’ve installed all of these extensions I’m really happy with VS Code. Sure it’s not as good as IntelliJ – but VS Code is FREE.
So if you want to mess around at home for some personal projects, this is brilliant.