Frederik is a photographer, blogger, and YouTuber based in the Copenhagen region of Denmark. Outdoor photography is his preference, but he can also be found doing flash photography for product shoots and still lifes.
Split toning simply means applying different colors to the highlights and the shadows in an image. You are changing the original colors based on the intensity of the light. Depending on the colors chosen, the emotional response to an image before and after split toning can be very different.
I see many photographers using split toning to give their images a distinct look, so the color profile is consistent throughout their portfolio. I am no master here and my colors are all over the place, but when reading the book by Finn, I could clearly see how strong a tool color grading in general, and split toning in particular, is. So if you have the energy and the discipline, split toning is a great tool for making your images distinct and different from most of what you find on, say, Instagram.
I apply split toning when working in Lightroom, and Lightroom even allows you to add three levels of toning: highlights, midtones, and shadows. But in the example above I have used just highlights and shadows. The colors used are blue for the shadows and a red-orange for the highlights.
It is no coincidence that I have used orange and blue. These two colors sit opposite each other on the color wheel and are thus complementary colors. Complementary colors create the biggest contrast, and as you probably know, contrast draws attention. In addition, complementary colors are apparently pleasing to the eye – I have no idea why, but judging from my own experience it sounds about right.
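If you are curious what split toning boils down to technically, here is a minimal sketch in Python using NumPy and Pillow. This is my own illustration, not how Lightroom does it, and the file names and tint values are made up: dark pixels are pulled toward blue, bright pixels toward orange.

```python
import numpy as np
from PIL import Image

# Illustrative split-toning sketch (not Lightroom's actual algorithm).
# Shadows get a blue tint, highlights an orange one; the per-pixel
# luminance decides how much of each tint is blended in.

SHADOW_TINT = np.array([0.0, 0.1, 0.3])      # blue-ish, RGB in 0..1
HIGHLIGHT_TINT = np.array([0.3, 0.15, 0.0])  # orange-ish
STRENGTH = 0.25                              # overall effect strength

img = np.asarray(Image.open("input.jpg").convert("RGB"), dtype=np.float32) / 255.0

# Rec. 709 luminance gives a 0..1 brightness weight per pixel
luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
luma = luma[..., None]  # shape (H, W, 1) so it broadcasts over RGB

# Dark pixels (luma near 0) receive the shadow tint,
# bright pixels (luma near 1) the highlight tint
toned = img + STRENGTH * ((1.0 - luma) * SHADOW_TINT + luma * HIGHLIGHT_TINT)

Image.fromarray((np.clip(toned, 0, 1) * 255).astype(np.uint8)).save("split_toned.jpg")
```

Play with the two tint vectors and you will quickly see why complementary pairs like blue and orange are such a popular choice.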
Some years back when I bought my copy of the Nikon D4, I did consider the Nikon D4S but decided to go with the D4. I have since made a few videos over at my channel about my experience with the D4, and there I have several times been asked: Frederik, why did you go with the Nikon D4 and not the D4S? Clearly, the D4S is a better camera, seems to be the thinking behind the question.
Service
The D4S is a better camera than the Nikon D4, no doubt about it. It is also a younger camera, released in 2014 and produced all the way to 2016, when it was replaced by the D5. And this points to one of the first differences between the cameras: because the D4S is younger, you can probably still get it serviced by Nikon. There is no official policy from Nikon on this matter, but word on the street is that Nikon will service and offer spare parts for cameras until they are 10 years old. Hence the D4S clearly has a better chance than the D4 of being serviced today, should something happen to it.
The age is reflected in another difference: the price. At introduction, the D4S was around 500 USD more expensive than the D4, but the relative difference between the two is now much bigger than that. A used D4S is significantly more expensive, and I cannot imagine it is all down to the technical improvements. It has to be because photographers also factor in that if the camera breaks down or needs service, the D4 is a dead end whereas the D4S is still “live”.
That said, with an expected shutter count of around 400,000 and the knowledge that Nikon cameras often go way beyond their rated shutter count, I doubt that I will ever see the end of my D4. But it is of course a risk that I cannot get it serviced or repaired, should the need arise.
EXPEED
One of the major upgrades from the D4 to the D4S is processing power: the EXPEED 3 is replaced by the EXPEED 4, giving about a third more computing capacity in the D4S.
I think this is one of the reasons why the D4S is better on the spec sheet when it comes to FPS and has a more advanced autofocus system. The increased computing power simply gave the engineers at Nikon more headroom to develop the software in the AF system. This could be important to you, but it is not important to me, as I am mainly an outdoor photographer. If portraiture or street photography is your line of business, then the improvements in the AF system could be vital.
There are other updates like a wider ISO range, a stronger battery and slightly redesigned joysticks for better comfort. But again, I think I’ll be fine without these improvements.
Conclusion
There are other differences between the D4 and the D4S, and my intention was not to list them all. If you want to see a full spec comparison, it is right here.
The D4 was one of the very best cameras the industry could offer approximately 10 years ago, and to me, choosing between the D4 and the D4S is a bit like choosing between a Bentley and a Rolls-Royce. Both are amazing!
The point is that the improvements made going from the D4 to the D4S were simply not important to me, and with the (in relative terms) significant price difference between the two cameras, my choice was easy.
But this shoe fits my foot. That does not mean it will fit yours. Your criteria are probably different and hence you will need to make your own assessment when choosing between the D4 and the D4S. But I hope my story here has helped you get a little closer to making the decision that is right for you.
I think the best way to describe color saturation is that a completely desaturated image is a black and white image! So the intensity of a color is its saturation, and a color that is completely desaturated is just a shade of grey.
The more grey you add to a color, the less saturated it is. (This is probably not technically correct, but I find it to be a good pragmatic way to think of it).
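To make the "blend toward grey" idea concrete, here is a small Python sketch (my own illustration, using NumPy and Pillow; the file names are placeholders). A factor of 0 gives a greyscale image, 1 the original, and values above 1 push the colors further from grey:

```python
import numpy as np
from PIL import Image

def adjust_saturation(img: np.ndarray, factor: float) -> np.ndarray:
    """img is a float RGB array in 0..1; returns the adjusted copy."""
    grey = img.mean(axis=-1, keepdims=True)  # simple per-pixel grey value
    # factor 0.0 -> pure grey, 1.0 -> unchanged, >1.0 -> oversaturated
    return np.clip(grey + factor * (img - grey), 0.0, 1.0)

rgb = np.asarray(Image.open("flowers.jpg").convert("RGB"), dtype=np.float32) / 255.0
muted = adjust_saturation(rgb, 0.5)  # halfway toward black and white
Image.fromarray((muted * 255).astype(np.uint8)).save("flowers_muted.jpg")
```

Real editors use a proper luminance weighting rather than a plain average, but the principle is the same.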
The saturation of a color in real life is a given, but you can tweak the saturation of a color in post processing. The screenshot above is from the post processing tool Lightroom, where the slider in the middle – in this example – allows you to take the intensity of the red color from grey (all the way to the left) to a very intense red (all the way to the right).
As colors speak to and invoke our emotions, desaturating an image can make it more subtle and calm. So if you want the structures and textures to play a bigger role in your image, taking the saturation down can change the balance of which elements dominate your picture.
You can also use saturation to change the balance between different colors. So if you have a field of red flowers on a green bed of branches and leaves, you may want to desaturate the green a bit to let the red flowers shine (relatively) more.
Banding is when the gradual transition from light to dark is not represented in a smooth and gradual way in the image, but rather as abrupt jumps from one level to the next. It often appears in bands, just like contour lines on a map. So that beautiful setting sun is not so beautiful, as the sky above it is shown as bands of red, orange, and yellow!
On this site you will find a lot of banding going on, and that is because I have to export the images highly compressed to support fast load times.
So banding is very often caused by compression, i.e. a JPG image throwing away too much information as it compresses the file to save space. Being a bit of a photo nerd, I often notice banding when I watch a movie on my 48″ LG TV, and I suspect the banding there is also caused by compression.
Banding is easy to see when a color is transitioning slowly from dark to light, but it is just as noticeable in black and white (see above).
Resolution
Banding is not caused by the image having too little resolution, but by too little information being stored per pixel. So one fix can be to make sure the images are stored with 16 bits of information per channel rather than 8, i.e. to shoot in RAW or TIFF and keep the editing pipeline at 16 bits per channel. However, the banding issue often arises not at the source but when exporting to JPG, and my best advice there is to go as light as possible on the compression.
Another fix is to introduce a bit of noise when editing the image; the noise acts as added detail and softens the transitions between the bands. This can be a good strategy if your image was created with too little information per pixel.
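You can demonstrate the effect yourself with a few lines of Python (my own sketch, using NumPy and Pillow, with an exaggerated bit depth so the bands are easy to see). Quantizing a smooth gradient to a few levels produces clear bands; adding noise of about half a quantization step before rounding – a simple form of dithering – breaks them up:

```python
import numpy as np
from PIL import Image

# A smooth horizontal greyscale gradient, values 0..1, as our "sky"
gradient = np.tile(np.linspace(0.0, 1.0, 1024), (256, 1))

LEVELS = 16  # deliberately few levels so the banding is obvious

# Plain quantization: visible bands
banded = np.round(gradient * (LEVELS - 1)) / (LEVELS - 1)

# Add noise of about half a quantization step before rounding (dithering)
noise = np.random.uniform(-0.5, 0.5, gradient.shape) / (LEVELS - 1)
dithered = np.round(np.clip(gradient + noise, 0, 1) * (LEVELS - 1)) / (LEVELS - 1)

for name, data in [("banded.png", banded), ("dithered.png", dithered)]:
    Image.fromarray((data * 255).astype(np.uint8), mode="L").save(name)
```

Open the two output files side by side: the first shows hard steps, while the second looks like a smooth, slightly grainy gradient.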
You can get into some very technical discussions around banding and why it happens, and some photographers are very frustrated to find banding in their prints that is not visible on screen, etc. I am no expert here and do not have all the answers. My ambition was just to give some insight into what banding is and a few high level fixes, should it come your way.
You will often hear experienced photographers talk about the JPG file format as a bad thing and RAW as the way to go. But I think there is a nuance to this: horses for courses.
It is true that JPGs are more “locked in” in terms of what you can do in post processing. Some have described RAW as the ingredients of a meal and the JPG as the cooked food, and the comparison is not bad. The RAW format gives more headroom for recovering details from bright highlights and dark shadows, and you can also do much more editing of colors, white balance, etc. than JPG allows. And the quality of the JPG file depends on the in-camera processing of the image, and hence on the quality of the software in the camera.
But it comes at a price, and the price is storage space. RAW files store a lot more information per pixel than a JPG does, and this is why JPG files are so popular on the web, where fast load times are a key factor. The resolution of a RAW image and a JPG image is the same, but the amount of information stored per pixel in the RAW format is much higher. The JPG file is also subject to compression, where a lot of information can be lost.
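A back-of-envelope calculation illustrates the difference. The numbers below are my own illustrative assumptions (a hypothetical 24-megapixel camera, 14-bit RAW samples, roughly 10:1 JPG compression), not measurements:

```python
MEGAPIXELS = 24e6

# RAW: one 14-bit sample per photosite on a typical Bayer sensor
raw_bits = MEGAPIXELS * 14

# JPG: 8 bits x 3 channels, then assume roughly 10:1 lossy compression
jpg_bits = MEGAPIXELS * 3 * 8 / 10

print(f"RAW ~ {raw_bits / 8 / 1e6:.0f} MB")  # ~42 MB
print(f"JPG ~ {jpg_bits / 8 / 1e6:.0f} MB")  # ~7 MB
```

Real file sizes vary with compression settings and scene content, but the order of magnitude – RAW files several times larger than JPGs – matches what you see on the memory card.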
So to say that RAW is good and JPG is bad is too simplified. Sometimes you just don’t need all the flexibility that a RAW file offers, and if you shoot a lot of images, the amount of space saved can be significant. Also, if you plan to use the images as JPGs because they need to be small, shooting in JPG directly saves you the conversion from RAW to JPG in post processing. So you may save both time and storage.
I often shoot JPG when I have a very controlled environment, like a studio with flashes in a tethered setup where the image is loaded directly into Lightroom for viewing at a large scale. Here I can quickly see whether the colors and metering are spot on and adjust accordingly. Where I need the RAW file flexibility is when more variables are outside my control, like when shooting in low light or into the sun. There I prefer the headroom in post processing that RAW files give.
So a softbox is a device that gives soft light, right? Well, yes and no. A softbox is a light modifier intended to make the light source bigger. And all things equal, when the light source gets bigger relative to the subject, the light gets softer. The softbox also helps make the best use of the light by redirecting light that would otherwise not have hit the subject.
Softbox example
Softboxes can be used for both continuous light and flash, but in the following I will assume that we are talking about flash.
Bigger softboxes are often made as an umbrella-like construction, where a set of wires defines the shape of the box. And just like an umbrella, the softbox can be folded to take up very little space when not in use. In my case the softbox is square, but many other shapes can be found.
Inside, the sides of the softbox are lined with reflective material to make the best use of the light. On the front of the softbox a white fabric is mounted, and this is all lit up when the flash fires.
My softbox from Godox is also fitted with a diffuser fabric in the middle, so the flash fires straight into this first layer of fabric, which diffuses the light and distributes it evenly within the softbox, making the final light on the front as even as possible. The ideal softbox gives an evenly lit surface on the front – if your softbox does a bad job here, you will see that the light is stronger in the centre than in the corners.
The rear of the softbox typically has a Bowens mount that allows you to attach the softbox directly to most light systems. In my case I used the supplied holder shown above, which connects to the softbox via the three pins in the Bowens mount and to the stand via a locking mechanism. In the centre of the ring above, the flash is mounted, and of course it needs to be radio controlled (or in optical slave mode).
Why a softbox
As mentioned, the softbox is intended to make the light source bigger and hence give softer light. Relative to shooting through an umbrella, the softbox gives much more direction to the light, and the light is also more evenly distributed.
Most pro photographers will tell you that getting the flash off camera is a much better option than having it on camera, as the options for positioning the flash become virtually unlimited. But to do so, you either need a cable between the flash and the camera (not recommended – limited reach and cumbersome) or some sort of radio communication between camera and flash.
Many modern flashes like the Godox V860III come with built-in radio receivers, but you may not have a transmitter to put in the hot shoe of the camera, or you may want to use an older flash as fill light and don’t want to invest in a receiver. What to do?
Optical slave mode
To the rescue comes optical slave mode. Not all flashes have this feature, but many do: inside the flash there is a small sensor that watches for other flashes firing, and when it sees one, it fires too – provided, of course, that you have set it up to do so.
You may ask how this is possible. Well, the time the shutter is open, say 1/100th of a second, is a barn door of time for a flash, so there is plenty of time for one flash to fire, another to see it and fire shortly after, and still stay within the time the shutter is open. Flashes are unbelievably fast!
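The arithmetic backs this up. The numbers in this little sketch are typical magnitudes I am assuming for illustration, not measured values for any specific flash:

```python
# All times in seconds
shutter_open = 1 / 100   # 10,000 microseconds
flash_burst = 1 / 1000   # a burst is ~1 ms at full power, often far less
slave_delay = 50e-6      # optical slave reaction time, tens of microseconds

# Main flash fires, slave reacts, slave fires
sequence = flash_burst + slave_delay + flash_burst
print(f"shutter open:       {shutter_open * 1e6:,.0f} us")  # 10,000 us
print(f"two-flash sequence: {sequence * 1e6:,.0f} us")      # ~2,050 us
```

Even with two full-power bursts and the slave's reaction time, the whole sequence fits inside the shutter window several times over.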
Built-in flash
If you have a camera with a built-in flash, like the Nikon D750, you can use the built-in flash to trigger the off-camera flash. If the camera shoots in TTL mode (the automated flash mode), it will fire a pre-flash to measure the light and immediately after fire the flash that exposes the image. It happens so fast you won’t notice the two flashes, but your slave flash will! Therefore, you need to tell the slave flash to ignore the pre-flash and only fire when the main flash fires.
My Godox V860III has two slave modes: S1 fires every time a flash is seen; S2 fires at the second flash and hence ignores the pre-flash. So when I control the built-in flash manually, I set the slave flash to S1, and when I shoot TTL, I set it to S2.
Limitations
Some say that the slave flash needs to have line of sight to the main flash, but I have been able to get the slave flash to fire even without direct line of sight. But you will need to try this out and see what works with your combination of flashes. I will say though that the slave flash will need to see a lot of light in order to be able to react, so direct line of sight is probably the safe way to go.
Another thing to note is that the strength of an optical slave flash can only be set manually. There is no communication between the camera and the slave flash at all – only a visual signal saying: please fire! This is one of the main limitations of optical slave flashes relative to radio controlled flashes. On that note, if the camera with a built-in flash is in TTL mode and the optical slave flash is set to S2, the camera will not be able to factor in the optical slave when it meters the scene with the pre-flash. So remember to account for the additional light using flash exposure compensation, or set the strength of the optical slave flash low.
Several flashes
If you have a trigger on the camera and a radio controlled (off-camera) flash, it is still possible to use an additional flash in optical slave mode. It simply reacts to the radio controlled flash. This way you can bring the radio controlled flash and the optical slave closer to each other and be more certain that the slave will fire.
You may have noticed that your eyes work a bit differently at night than in the daytime. Due to the way the eye is constructed (the so-called Purkinje effect), red, yellow, and orange will appear less bright relative to blue and green when perceived in low light.
This can lead to some frustration as a photographer, as your camera does not follow this logic and simply registers the light as it is. So what you remember seeing at night may not be what you find when you open the image in your post processing software! The solution is to color edit the images, making the red, yellow, and orange less intense so the image is better aligned with how you remember the scene.
Colors and emotions go hand in hand, like the horse and carriage in that famous Frank Sinatra song. And as such, color can be used as a tool in your photography to evoke the emotions you are after.
Think of a midsummer morning where the sun is just rising, filling the room you are in with warm light and long shadows. What colors do you think of? Probably yellow, orange, and red. If I had asked you to think of a frosty, windless winter's morning, what colors would spring to mind? Probably cool blue or white. Filmmakers are exceptionally good at using colors to underline or emphasize a mood – I often notice the color coding they use (and the music, of course) to create a certain atmosphere. In dystopian movies like Blade Runner, blue and brown tones often dominate to underline the unsettling look into the future.
Colors not only induce emotions; they can also be used to create patterns and connect objects that would otherwise seem unrelated.
Next step
One way to study the effect of colors, using your own reaction as a guide, is simply to make both a color and a black and white version of an image and see how the different versions work for you. You can also try altering the colors in post processing and play with saturation, hue, and brightness.
The point of this post is not that there is a right and a wrong when it comes to colors. But if you learn how to use colors to achieve a certain effect, your images will have a much bigger impact. And of all the tools in the photographer's toolbox (composition, exposure, etc.), color is the strongest of them all.
I often find that some of the simplest and most fundamental techniques in photography are also the ones with the biggest impact. So when it comes to filling the frame, there really is not much more to say than: fill the frame with your subject.
Filling the frame does not necessarily entail a macro shot, although you will often go very close to the subject to fill the frame. But the point of filling the frame is that the “stage” for the subject is lost and only the subject is left. So a lot of the storytelling and the relationship between subject and surroundings is gone when you fill the frame.
You can fill the frame when shooting, but if your camera has sufficient resolution, it is certainly also an option to crop the image in post processing and get the same effect.
In the example above, I chose to frame the image with a lot of contextual information. I could have gone much closer to the withered leaf in the center and gotten a very different expression, focusing more on the withered versus the living leaf than on the rainy day scene surrounding them.
There is no right or wrong here – just different expressions. So it comes down to what story you want to tell and the expression you want to convey. Filling the frame is just one of many options for composing your image.