Vehicles Song with The Kids’ Picture Show | Cars, Trains, Planes and More



Wow, I see an airplane! Airplane, airplane! Bicycle, bicycle! Fire engine, fire engine! And train, train! Helicopter, helicopter! Bulldozer, bulldozer! Ambulance, ambulance! And car, car! Airplane, airplane! Bicycle, bicycle! Fire engine, fire engine! And train, train! Helicopter, helicopter! Bulldozer, bulldozer! Ambulance, ambulance! And car, car!

What Are The Most Important Science Images Ever?


Welcome to It’s Okay To Be Smart. Today we’re
gonna look at the Big Picture. [music] So I’ve been preparing for a couple big science
conferences recently and I’ve been thinking a lot about the importance of images to communicating
science. Whether it’s YouTube, Instagram, Tumblr, so many of the ways that we communicate
today highlight images over words. It’s not that I think actual words on paper
are dying off, in fact those same digital tools are giving science writing something
of a rebirth. But the value of images as cultural currency is skyrocketing. Of course
this is nothing new to us in Science Land. Throughout the history of science photos and
illustrations have not only captured key moments IN science, but they’ve served as first “shots”
in what Thomas Kuhn would call “scientific revolutions” where paradigms are shifted,
theories are realized, new fields of science are born, and minds are generally blown. In that spirit I’ve collected a few of what
I think are the most important images in science history. In 1543 all it took to change the world was
seven circles. This is the so-called Copernican model of the solar system in which Nicolaus
Copernicus permanently demoted Earth from its position at the center of the universe,
in his book De Revolutionibus Orbium Coelestium, which is just really fun to say. Now this was not actually a very popular book,
people did not take to the streets and riot calling for Copernicus’ head. But it did change
the world, no pun intended. In fact, he wasn’t even the first person to think of this idea,
that honor goes to a Greek named Aristarchus nearly 2,000 years earlier, which we have
talked about before. But the real impact of Copernicus’ work was
that it changed the very way we look at the universe. Not only was our position in it not special,
it meant that the laws of nature that we observe here would be the same everywhere else in
the universe, and although that sounds simple, that might be the most important scientific
principle that we can take from his work. This is a flea. A flea is very small. You’d
think it would be too small to change the world on its own, but you’d be wrong. This
one did just that, it comes from Robert Hooke’s Micrographia, a collection of illustrations
he put out in 1665 that became the world’s first scientific best-seller. It was hugely
popular. About half a century after people like Galileo
were turning lenses to the stars to bring them closer, Robert Hooke turned the telescope
around to bring the microscopic world to life. Now this drawing would be a work of art in
its own right, but that intricate detail and the perfect matching of form to function on
this wee beastie, it began to challenge notions of design in nature, and shattered the idea
that humans were the most perfect living form on Earth. That beautiful illustration of a flea inspired
naturalists for the next two centuries to begin to ask WHY these forms, at every scale
of nature from the smallest bug to the largest tree, matched up so well with the needs of
those creatures. One of those scientists was this guy. Worked out pretty well. Einstein’s general theory of relativity was
a revolutionary concept when he introduced it, but scientists had relatively few ways
of actually testing it. One of the consequences of Einstein’s theory is that light should
be bent by gravity as it passes near a massive object. Now that means for stars behind the edge of
our sun, we could actually see them because their light would be bent around it. Unfortunately
our sun is so bright that we can’t see those stars around the edge, but in 1919 an eclipse
took place that was particularly long and dark. British astronomers Andrew Crommelin
and Arthur Eddington went to South America to capture it on film. With the sun blocked
out they were able to measure the bending of light waves around a massive object for
the very first time, and Einstein’s theory of relativity was proven correct, and he became
the celebrity we know and love today. In their 1953 paper describing the double-helical
nature of DNA strands, Rosalind Franklin, with the help of James Watson and Francis
Crick, well, they, they changed everything. This simple sketch showing these two ribbons,
antiparallel and complementary bases in between, it outlined the molecular nature of genetics
and described the universal information carrying molecule for all life on Earth. That’s kind
of a big deal. I think my favorite part of this one is that it looks like it was sketched
on the back of an envelope. Although it was one of the most important scientific findings
of all time and it appeared in one of the most prestigious journals on Earth it was
so simply drawn that anyone could understand it. This is what you get with 23 days of exposure
on the Hubble telescope. You see galaxies one ten-billionth as bright as the limit of
the human eye. In this image we can see galaxies nearly 13.2 billion light years away, that
light has been traveling since nearly the beginning of time itself. Countless planets
and stars might exist inside them, it’s time travel in a photograph. On Christmas Eve 1968 as Apollo 8 came out
from behind the moon, they saw Earth rising above the lunar horizon. This picture’s a
role-reversal of sorts. Instead of this barren white moon rising above us, they saw this
delicate jewel, a blue, living Earth rising before their eyes. When that image hit the
magazines and newsstands and TV screens back on Earth, it changed the way that we view
our living planet. Galen Rowell called this “the most important environmental photograph
ever taken.” So why are pictures so important to science?
Our minds seem to be built for images; vision is our primary sense. Words and numbers are
invented languages that can enhance our communication, but I think that images are a universal language,
something whose meaning and importance we understand from birth. Our minds are also
limited. In his book “Cosmic Imagery” John Barrow says that images allow us to capture
something memorable without it needing to be remembered. The way I look at it, capturing
a moment is just another way of saying “observation” and that’s what science has been built on
from the very beginning. I only picked a handful of my favorite images
from science history, so I know I missed a ton of great ones. Why don’t you leave me a
comment with your favorites? Who knows, maybe I’ll feature some of the best over on my Tumblr.
Thanks a lot for watching, and stay curious. Thank you all so much for helping to make
my science of kissing video the second most viewed video I’ve ever made. A few commenters
pointed out that this kissing research only focused on hetero male-female couples, which
is actually something we talked about in the video, but so much psychological research
has this bias. We tried to find more to put in the video, but it simply wasn’t there.
Psychologists, if you’re listening, we need to represent more people out there. So we’ll
keep an eye on that. Thanks to everyone who enjoyed the science
fiction as science fact episode from last week. You’ve already pointed out a bunch of
great science fiction that I missed. I hope you’ve also watched our collaborators’ videos
over at Idea Channel and the main PBS Digital Studios page. Lots of people left comments
saying that some science fiction is so good at predicting actual science because scientists
are people too, and they read science fiction, and they might be inspired to make what they
see into reality. And I
absolutely agree. We actually had a line in the episode exactly about that, but I decided
to pull it out so you’ll just have to take my word for it. It’s one of those unique intersections
of science and art that I think feeds into many parts of our brain and helps us create
things that we wouldn’t be able to do otherwise. We have some really special episodes coming
up over the next couple weeks. I ran a marathon for science, and then we’ve got one that’s
a little bit Dr. Seuss and a little bit chemistry, so make sure you subscribe so you don’t miss
those. Links to the email, twitter, tumblr, everything else down below and be sure to
leave me a comment if there’s something you’d like us to tackle in a future episode.

How To Move Pics With Background Change Using 3D PowerPoint Trix



Hello viewers! Welcome to RK Photomagic Trix. So papa, you have done the 3d pop out to our picture from Mauritius, but can you move it, make it nicely plain or something? Viewers, she wants me to do things using PowerPoint which are not even possible in Photoshop. As you can see behind, there is a picture of us, which has a beautiful background. We have done nothing to it, it is a completely natural picture, no color corrections have been made, nothing at all. This is what Mauritius Grand Bay looks like. Look at the blue color of the sky, the color of the water. It’s so beautiful. Now what Arkshya wants me to do is enhance the picture. How to enhance the picture? Now there is the sky, right? What if I could get a jumbo jet flying out of the sky? I am going to take another picture, as you can see, I have opened another picture, which is of an airplane. Now I will extract the plane out of this picture. How will we extract it? We will crop it. First we will crop it and then remove the background. If you want to know more about remove background, we have shown explanatory details in our other tutorials. So what we have done is we have cropped and removed. We’ve extracted. Extraction is similar to cutting out an airplane from a piece of paper and leaving the rest of it. Now we have extracted the airplane and we have the picture from Mauritius. So what should we do now? We should drop the plane. We should drop the plane and the police will arrest us for that… I don’t know what she is up to. What we have to do is… We have this plane, we are slowly bringing this plane onto this picture and we will resize the plane. As we can tell from the box around it, the plane is selected; we will resize it from here so that it looks smaller. Now, what we are going to do is even more interesting. It will look like the plane is getting bigger and bigger exactly like in reality. A plane looks bigger as it comes closer. So we will give it an animation. We will right click the picture then select ‘add animation’ then ‘grow shrink’. Look, the size of the plane is gradually increasing, while the rest of the picture is the same. It is only the plane size that is increasing. This isn’t looking very nice since the rest of the picture is the same
and only the plane size is changing. So what we’ll do is … Can you make the clouds move as well? Yes, we can make the clouds move. What we will do is we will make three to four copies of this picture, one over the other. How do we make copies? Ctrl+c and ctrl+v for copy and paste respectively. Alright? So we will do exactly like this. So what we are going to do now is make the whole picture move. We now have five copies of the picture, one over the other and so on. From one of the copies, we will only crop out the sky, okay? We will extract the sky. So Arkshya, whenever you cut your nails, the edges become sharp and to smooth the edges, you then file your nails. Here we are using soft edges, as we use soft edges, it softens the picture and reduces sharpness. We now have clouds, water and ground in separate layers. Now we will also extract ourselves from one of the copies. Now please understand what we have done. If you understand, you have five photocopies. In the first one you have cut your picture, in the second one you have cut your clouds, in the third one you have cut the water, in the fourth one you have cut the ground. So now everything is in separate layers, right? So now if we make everything move, then it will appear as if the whole picture is 3d. And we have also cut ourselves and because we have extracted ourselves, we will make ourselves move as well. Like if the plane is coming closer from the back, then we will also increase or decrease our sizes, we will make the water move. So you see, the animations we are using in this, we have selected ‘add animation’ for everything and given grow shrink to one, right motion to the other and left motion to the other. We are picking and choosing whichever motion will look good on it. And we will make all the animations work simultaneously. We will select ‘with previous’, we can also select ‘after’, see as we are experimenting behind, what will look good. When we finish the animation we press F5, what does F5 do? We can watch our slide show. Alright so… Now I am pressing f5 after completing the animation and now see the magic. It appears as if the plane is coming from behind, the clouds are moving, the water is moving and everything. So this has become a 3d moving picture. Please read my lips, this is made by only using PowerPoint. PowerPoint 2010, PowerPoint 2013. So, people who thought PowerPoint was very boring, you have to eat your words. If you want to see more magic on PowerPoint, just visit our website www.rkphotomagictrix.com and don’t forget to like and share our Facebook page and remember this is a website where we are making PowerPoint do things that people thought were not possible before.
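Everything above happens through PowerPoint’s own GUI features (Crop, Remove Background, Soft Edges and the Grow/Shrink animation). For readers who prefer to see the layer-extraction idea spelled out, here is a rough programmatic analogue using the Pillow imaging library rather than PowerPoint; the file names and the rectangular “sky” region are made-up placeholders, and this only sketches the crop-plus-feathered-edge step, not the animation itself.

```python
# Illustrative analogue of the "extract a layer with soft edges" step.
# Hypothetical file names and region; PowerPoint itself does this via its GUI.
from PIL import Image, ImageFilter

photo = Image.open("mauritius.jpg").convert("RGBA")

# Pretend the sky occupies the top 40% of the picture (placeholder region).
sky_box = (0, 0, photo.width, int(photo.height * 0.4))
sky = photo.crop(sky_box)

# Build a feathered alpha mask: fully opaque in the middle, blurred at the
# border to mimic the "Soft Edges" effect and hide the sharp cut line.
mask = Image.new("L", sky.size, 0)
margin = 20
mask.paste(255, (margin, margin, sky.width - margin, sky.height - margin))
mask = mask.filter(ImageFilter.GaussianBlur(radius=margin / 2))
sky.putalpha(mask)

# The feathered "sky layer" can now be composited and moved independently.
sky.save("sky_layer.png")
```

The feathered layer can then be moved or resized on its own, which is exactly what the five stacked copies achieve inside PowerPoint.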

AI Learns 3D Face Reconstruction | Two Minute Papers #198



Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér. Now that facial recognition is becoming more
and more of a hot topic, let’s talk a bit about 3D face reconstruction! This is a problem where we have a 2D input
photograph, or a video of a person, and the goal is to create a piece of 3D geometry from
it. To accomplish this, previous works often required
a combination of proper alignment of the face, multiple photographs and dense correspondences,
which is a fancy name for additional data that identifies the same regions across these
photographs. But this new formulation is the holy grail
of all possible versions of this problem, because it requires nothing else but one 2D
photograph. The weapon of choice for this work was a Convolutional
Neural Network, and the dataset the algorithm was trained on couldn’t be simpler: it was
given a large database of 2D input image and 3D output geometry pairs. This means that the neural network can look
at a lot of these pairs and learn how these input photographs are mapped to 3D geometry. And as you can see, the results are absolutely
insane, especially given the fact that it works for arbitrary face positions and many
different expressions, and even with occlusions. However, this is not your classical Convolutional
Neural Network, because as we mentioned, the input is 2D and the output is 3D. So the question immediately arises: what kind
of data structure should be used for the output? The authors went for a 3D voxel array, which
is essentially a cube in which we build up the face from small, identical Lego pieces. This representation is similar to the terrain
in the game Minecraft, only the resolution of these blocks is finer. The process of guessing how these voxel arrays
should look based on the input photograph is referred to in the research community as volumetric
regression. This is what this work is about. And now comes the best part! An online demo is also available where we
can either try some prepared images, or upload our own. So while I run my own experiments, don’t leave
me out of the good stuff and make sure you post your results in the comments section! The source code is also available for you
fellow tinkerers out there. The limitations of this technique include
the inability to detect expressions that are very far away from the ones seen in the
training set, and as you can see in the videos, temporal coherence could also use some help. This means that if we have video input, the
reconstruction has some tiny differences in each frame. Maybe a Recurrent Neural Network, like some
variant of Long Short Term Memory could address this in the near future. However, those are trickier and more resource-intensive
to train properly. Very excited to see how these solutions evolve,
and of course, Two Minute Papers is going to be here for you to talk about some amazing
upcoming works. Thanks for watching and for your generous
support, and I’ll see you next time!
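To make the volumetric-regression idea above a little more concrete, here is a deliberately tiny sketch in PyTorch of a network that maps a 2D image to a voxel occupancy grid. It is an illustrative toy under assumed sizes (a 128×128 input, a 32-slice output volume), not the architecture or training setup from the paper.

```python
# Toy volumetric regression: 2D image in, 3D voxel occupancy grid out.
# Illustrative only; not the paper's actual network or training setup.
import torch
import torch.nn as nn

class TinyVolumetricRegressor(nn.Module):
    def __init__(self, depth=32):  # depth = number of voxel slices (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, depth, 3, padding=1),  # one output channel per depth slice
        )

    def forward(self, image):
        # image: (B, 3, 128, 128) -> logits over a (B, depth, 32, 32) voxel grid
        return self.encoder(image)

model = TinyVolumetricRegressor()
images = torch.randn(4, 3, 128, 128)                           # fake batch of face photos
target_voxels = torch.randint(0, 2, (4, 32, 32, 32)).float()   # fake 3D ground truth

logits = model(images)
# Per-voxel binary classification: is this little "Lego block" occupied or not?
loss = nn.functional.binary_cross_entropy_with_logits(logits, target_voxels)
loss.backward()
print(loss.item())
```

Each output channel plays the role of one depth slice of the volume, so the per-voxel loss asks, for every little “Lego block”, whether it belongs to the face or not.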

Finding Images



Whatever your discipline, you might find
images to be really useful resources for
research. Paintings, engravings, diagrams, and any
other visual representations of your subject
can help you better understand your topic. UVic subscribes to image databases like
ARTstor, Camio, and Oxford Art Online, which you can find in the Images subject
guide, or under the databases tab. You can find images on the web using an
image search engine like Google Images or
Picsearch, or by searching an image database such
as Wikimedia Commons or Flickr. Most of these resources will allow you to
limit your search to results that you can
use free of copyright. You can also find images in databases
such as Images Canada, the UNESCO Photobank, or the Earth
Science World Image Bank. Many libraries and museums have large
digital image collections, including the
Smithsonian Institution, the New York Public Library, the Library of
Congress, and the Virtual Museum of
Canada. Visit the individual Subject Guides for more
subject-specific image sources. There’s also a specific Subject Guide for
Images, and one for Medical Images. There are different ways to cite images,
according to different citation style guides
like MLA or APA. Consult the appropriate style guide for
more information. For instructions on citing online images,
check out the guide created by SFU
Library, as well as the UVic History in Art Style Guide and the
UVic Department of History Style Guide. If the image is a map or remote-sensing
image like a satellite image, include the term [map] after the title, and
the scale if available. You can also watch our video on Finding Maps. For more help, visit the subject guides, watch more videos, or meet with a librarian. Thanks for watching!

What is a dimension? In 3D…and 2D… and 1D


How do we know we live in three dimensions?
Here’s a clue: it’s not just that we have to use three coordinates (like x,y,z, or latitude,
longitude, altitude) to label every point in space – because we don’t! Mathematicians have shown
to fill up 2d or 3d space using a one-dimensional “space-filling” curve – that means that every
point in 3d space can be labelled using just one coordinate: our position along the curve!
(it also means that a square and its side contain the same number of points – crazy,
right?) So how do we know that we live in three-dimensional
space and not on a one-dimensional line curled up so much that it looks three-dimensional?
Well, the short answer is that we don’t know – but we DO know that it looks 3d. So how
do we test that? One way is to look at diffusion of gas – that
is, how a gas spreads out over time. We just measure the ratio between volume and radius
of the gas cloud: In one dimension, radius and volume are the
same! (up to a factor) In 2d, “volume” means area – or radius squared,
and in 3d, “volume” is radius cubed, and so on for higher dimensions… and 3d is what
we see. So basically, determining how many dimensions
we live in is just a bunch of hot air!
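The diffusion argument boils down to a scaling law: the “volume” of the gas cloud grows like its radius raised to the number of spatial dimensions. Writing that out (with a geometry-dependent constant $c_d$ that doesn’t matter for the argument):

```latex
% "Volume" of a cloud of radius r in d dimensions:
% d = 1: length ~ r,   d = 2: area ~ r^2,   d = 3: volume ~ r^3
V(r) = c_d \, r^{d}
\qquad\Longrightarrow\qquad
d = \frac{\log V(r_2) - \log V(r_1)}{\log r_2 - \log r_1}
```

So measuring the cloud at two different radii and taking the slope on a log–log plot reads off the dimension directly; for the gas clouds we actually observe, that slope comes out to 3.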

Mona Lisa may be First 3D Image! Stereoscopy


So the Mona Lisa might be the first 3D Image! German researchers Claus-Christian Carbon
and Vera Hesslinger studied the famous Leonardo da Vinci portrait alongside a very similar
copy known as the “Prado Mona Lisa” in Spain. They conclude that the pair might be the world’s
first stereoscopic image. But wait! What is a stereoscopic image? How
can two images be a 3D image? Stereoscopy is a technique where you create
the illusion of depth by using two similar, but slightly shifted images. Basically, it
mimics what our eyes see. When your left eye and right eye look at the
same object, each sees a slightly different view because of the distance between your pupils.
Your body sends these two flat images to the brain, and your brain smooshes the two images
together, and voila, you have depth perception! Amazing! However, since you’re looking at a flat
image, rather than a 3 dimensional object, tricking your eye into thinking it’s 3 dimensional
is a little trickier. You might see something if you full screen this video, cross your
eyes, and adjust your head closer or farther from the screen. But that might be a bit uncomfortable,
and still not produce the desired result. So that’s why we have a stereoscope to comfortably
view the images. The first one was invented by Sir Charles Wheatstone in 1838, but if
da Vinci were to have made one to accompany these two paintings, he would have predated
this by more than 300 years. The Prado version was introduced to the public
in 2012 as a possible work of da Vinci or one of his students. It’s theorized that the
paintings were painted side by side, which would explain the slight difference in perspective. Intrigued, Carbon and Hesslinger decided to
calculate the positions between the painters (or painter) and they concluded that the horizontal
shift between the two paintings was 2.7 inches, which happens to be very close to the average
distance between a person’s eyes. Turns out, among all his other studies, da Vinci
also researched monocular and binocular vision, aspects of optics, eye anatomy, and light
reflections. So it could’ve been that he intentionally tried to create stereoscopic
art. It’s impossible to know whether Carbon and
Hesslinger’s observations are just a coincidence, or if it was intentionally made to be stereoscopic.
What do you guys think? Do you buy it? Or do you just think it’s a result of how the
two were produced side by side? If you liked this video, please subscribe
and share with your friends. And thanks so much for watching! As always
if you have any comments or questions, let me know down below. Or shoot me a question
on Tumblr, or Tweet me @LittleArtTalks. I’ll see you guys next time. Bye!
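For reference, the depth cue a stereo pair encodes can be written down with standard stereoscopic (pinhole-camera) geometry. This is textbook stereoscopy rather than anything from Carbon and Hesslinger’s study:

```latex
% Z          = depth of a point,
% B          = baseline between the two viewpoints
%              (roughly 2.7 in / 6.9 cm for human eyes),
% f          = focal length,
% x_L - x_R  = horizontal disparity of that point between the left and right views.
Z = \frac{f \, B}{x_L - x_R}
```

Nearer points shift more between the two views, so a bigger disparity means a smaller depth; that disparity is the signal the paired Mona Lisas would carry if they really do form a stereoscopic pair.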

The Image Toolset – Part 8 – 3D Selective – Adding Motion Blur – Flame 2020.1



Hi everyone, Grant for the Flame Learning Channel. In parts 5 and 6 of the Image Toolset series… We looked at the 3D AOV capability… Where you could produce a selective matte for your image… Based on supplied 3D information. So you could create isolation mattes… based on the z-depth of the image… Or the normals of a 3D object. With the Flame 2020.1 update… We now have the ability… To use motion vector data… To generate an isolated selection. So you can identify an object’s movement and direction… And use that as the basis for a 3D Selective. You can feed the isolation matte into any SelectiveFX shader… And be as creative as you like. However, the use case you’re about to perform… will show you a creative yet practical use for the Motion 3D AOV. Simply put, you’ll add more motion blur into a shot… Using the 3D Selective workflow. Now if you are new to 3D Selectives… I suggest watching parts 5 and 6 of the Image Toolset series… To explain the basics and fundamentals of 3D AOVs. For everyone else… if you’d like to follow along with this video… Click the link in the description below… Or type the link displayed to download the media. Now import the downloaded media… And either open it as a sequence… Or edit it into an existing sequence. Looking at this clip in the player… You will see a shot of a guy taking his bike down a ramp. Now there is a little bit of motion blur… But the shutter may have been set high… So the subject is still quite sharp. Adding a bit more motion blur… will enhance the movement further. As a side note, if this was a CGI image… You could be supplied with a Motion Vectors data pass. You could use that with the 3D Selective. However since this is a live action shot… At some point, you’ll have to generate the Motion Vectors pass in Flame. So you’ll use the Image TimelineFX for this example… Since there are no other external inputs. If you were provided with a separate motion vectors pass… I would suggest using either the Image or Action node in Batch or BatchFX. Switch to the Effects Environment… And if you don’t have this layout already… press ALT+2 for the 2-up view. Manager on the left with 8… And result view on the right with F4. Now expand the selective in the Manager. By default, the Selective is using the MasterGrade SelectiveFX shader. This is fine for grading. But to add motion blur… Delete the MasterGrade… And add a SelectiveFX through the context menu. At the bottom of the list… Choose the Motion Blur SelectiveFX. Now this shader’s controls… Won’t do anything without a motion vectors data pass. So if you were supplied with one… You could add it as a Motion Vectors map via the media list. But remember I said earlier… That you would need to do that in Batch or BatchFX… With an Image or Action node. In this case, there is no Motion Vectors Map… So you need to generate it… And this can be done quickly in the timeline… Using the Image TimelineFX. Go to the Selective Controls… And switch to the 3D AOV menu. Change the Type to Motion. Now click CREATE MAP. The Motion Vectors are generated for the shot… And you can verify this in the manager. To view the Motion Vectors… You can select it in the manager… And over the result view… Press F8 for the Object View. Scrubbing the sequence… You can see the motion vectors update per frame. With the Motion Vector Analysis… You can cache on scrubbing… Or click the Cache Range button. You would use these to improve performance if required… And I cover these features in great detail… In the Motion Warp tracking videos for Flame.
Select the Selective for its menus… And press F4 to return to the result view. Now go to Frame 25… And turn up the Motion Blur Exposure to 10. So the cyclist has plenty of motion blur… But if you look at the background… The motion blur is affecting other moving people and objects in the shot. You could use Keyers and masks in the Selective… To isolate the Cyclist. But since the data has already generated for motion… You can use that as the 3D selective. So enable ACTIVE to turn on the 3D Selective. Now just looking at the result view… You can’t really tell what the Motion 3D selective is actually affecting. So let’s use another view to see the selective… As well as monitor the result. Press ALT+3 to switch to a 3-up view. Now set the manager to one view with 8… The result view to the middle viewport with F4. For the final viewport… Hover over the third view… And press F9 to switch to the Selective View. Currently you should see the Selective matte output. This is the matte that is being generated by the Motion Vectors. Admittedly, this doesn’t help in this context… When trying to target specific objects for the SelectiveFX shaders. Instead, press F9 over the Selective View again… And this will toggle to the Selective Input. You can now see the image with an overlay… Of where the Motion Blur will be applied. Currently, it’s affecting the people in the background… And not our cyclist. So this is where the 3D AOV Motion controls will be invaluable. In the interface, you will see a widget… Which allows you to define the direction… As well as the speed of the object. So as you move the ball around… The overlay updates… Showing you the direction being chosen for the 3D selective. In the case of the bike… It’s travelling to the bottom left of the frame. So position the widget to point in that direction. Now the overlay is affecting most of the frame… Because you need to define how fast the object needs to be moving… In order to be considered by the Selective. This is known as the Motion Minimum… Which is quite low at this point. You can increase this slider… Or you can pull the widget more away from the centre… And that will perform the same operation. Around 0.18 will be fine. Looking at the overlay in the Selective View… Only objects travelling in this direction… Above a certain speed… Are considered in the Selective. So using the widget… You can really mould where the Selective will be applied… Based on Motion. Looking at the bike in the Result view… You can see the Motion Blur being applied in the desired area. Now you can also set a maximum speed cut off… As well as increase the gain and fall off of the selective. But another useful slider is the Angular Threshold. So you’ve defined a direction for the bike… However, when you scrub to the beginning or end of the clip… The bike moves horizontally and not towards the bottom. So these frames are not considered in the Selective as much. So using the Angular Threshold… You can expand the angle of direction considered for the selective. So now the bike should be covered for whole shot. As a tip, the 3D AOV values can be animated. So if something changes direction quite drastically… You can animate the Selective to match. In addition, you can still use Keyers, masks… And all the tracking tools to segment a subject with motion… To isolate it further in the shot. This is something you can try on your own another time. You can now switch back to a 2-up view with ALT+2… And you can scrub through the result view. 
If the edges seem a little harsh on the motion blur… You can use shrink, dilate and blur the result of the selective… And this should fix most of these issues. However it is worth mentioning… that Motion Vector analysis in general… Works pretty well with motion heading in a straight direction. You may encounter artefacts with spinning motion such as wheels… As well as when two objects cross over each other from opposing directions. This is simply the nature of current technology. So in summary… The Motion 3D Selective allows you to isolate a portion of your image… Using the speed and movement of objects in your shot. This is all determined using Motion Vectors… Which can be supplied by CGI… Or generated through a Motion Vector Analysis. And like all the other Selective tools within the Flame products… You can apply any SelectiveFX shaders to these isolation mattes… And this can certainly give a new added dimension… in any grading, VFX and look development work. Don’t forget to check out the other features, workflows… And enhancements to the Flame 2020.1 update. Comments, feedback and suggestions are always welcome and appreciated. Please subscribe to the Flame Learning Channel for future videos… And thanks for watching.
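Flame drives all of this through its UI, but the logic of a motion-based selective (keep only the pixels whose motion vector is above a minimum speed and points roughly in a chosen direction, within an angular threshold) can be sketched in a few lines of NumPy. This is a conceptual illustration under assumed conventions for the vector field, not Autodesk’s implementation:

```python
import numpy as np

def motion_selective_matte(flow, direction_deg, motion_min=0.18,
                           motion_max=None, angular_threshold_deg=45.0):
    """Conceptual 'Motion 3D Selective': flow is an (H, W, 2) array of
    per-pixel motion vectors (dx, dy) in assumed units."""
    speed = np.linalg.norm(flow, axis=-1)
    angle = np.degrees(np.arctan2(flow[..., 1], flow[..., 0]))
    # Smallest absolute angular difference to the chosen direction.
    diff = np.abs((angle - direction_deg + 180.0) % 360.0 - 180.0)

    matte = (speed >= motion_min) & (diff <= angular_threshold_deg)
    if motion_max is not None:                  # optional maximum-speed cutoff
        matte &= (speed <= motion_max)
    return matte.astype(np.float32)             # 1.0 where the effect applies

# Example: select motion heading towards the bottom-left of frame
# (with image y pointing down, dx < 0 and dy > 0 is roughly 135 degrees).
flow = np.random.randn(540, 960, 2).astype(np.float32)
matte = motion_selective_matte(flow, direction_deg=135.0, motion_min=0.18)
```

In practice you would still soften the resulting matte (the shrink, dilate and blur step mentioned above) before feeding it to a motion-blur effect.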

3D Body Scanning Is Here, and It Could Change How You See Yourself



Have you ever wanted to see a perfectly replicated
model of your insides, in stunningly realistic 3D? Yeah…me neither. Regardless, 3D body scanning is here, and
it’s got some surprising applications. But do people actually want to use it? There’s been a sudden growth spurt in the
market of 3D body-scanning technologies. They use a set of cameras to capture a 360-degree
view of your body and then produce a visual model. There’s an obvious possibility here to create
a replica of your body for a computer game, but this capability is also being leveraged
by companies for several other, more real-world uses.
On the digital retail front, 3D scans of a customer’s body could be used to help them
virtually ‘try on’ items of clothing. Amazon reportedly just bought a New York-based
start-up called Body Labs, maybe to allow customers to find better-fitting clothes when
browsing Amazon’s fashion section. Things are a little further along in the fitness
sector. For a hefty price, you can buy a 3D body scanner
from Naked Labs that will report your metrics, like body fat and lean muscle mass, and give
you a perfect 360 view of yourself as a little avatar on your phone. The funny thing here is that this body scanner
doesn’t ever look at the inside of your body. It’s only taking pictures of the outside,
and measuring your weight. To produce your report, it compares your data
to a large database of pre-existing body-composition scans. Based on this comparison, Naked Labs’ algorithm
projects what your body composition probably is. Naked Labs is marketing this product as something
that can help you see your fitness progress in minute detail…but can we go even deeper? Yes. With 3D x-rays. MARS Bio-imaging can take highly-detailed
3D scans of the inside of your body. CT scans are a kind of imaging that makes use
of x-rays, and the fact that MARS is providing a 3D cross-sectional view isn’t new, all
CT scans do that. But this new kind of scanner uses more specialized
detectors to differentiate the x-rays’ frequency as they pass through parts of your body, resulting
in an image that gives a much clearer distinction between materials, like muscle and blood and
skin–and in incredible detail, all the way down to the microstructure of the bone. Hopefully, this advancement can improve the
diagnosis and treatment of myriad diseases and bodily issues. But despite the many applications of 3D body
scanning, the question remains…will people
people pretty squeamish. Our brains are hard-wired to dislike seeing
the insides of people because that could be an indication of disease or injury, and our
brain sees that as a threat. But this doesn’t necessarily pose a problem
for medical practitioners, who kinda need to be resistant to that kind of squeamishness…it’s
in the job description. What about personal use of 3D body scanning
for fitness, though? A recent study revealed that people felt ‘dejected
and dissatisfied with their body’ after participating in a body scan study, potentially
because the image they were shown didn’t match up with the idea they had of themselves
inside their head, something called self-discrepancy theory. Naked Labs’ app even greys out the avatar
it creates for you supposedly so that you can look at it as ‘objectively’ as possible
and to try to get you to let go of any ‘emotional attachment’ you may have to the model of
your body. So even they recognize that our relationships
to our bodies can sometimes be fraught, and that this may be an issue in implementation. So we’ll see if this becomes a new consumer
tech staple, or if it falls flat. Gathering data about body metrics is definitely
trendy, but also a necessity for extreme athletes like Ben Lecomte as he swims across the Pacific. Ben and his team aboard the Seeker Vessel
are using a heart monitor to track Ben’s cardiovascular health. Check out this episode to see how it works. Let us know what you think in comments below,
and subscribe to Seeker for more health news. Thanks for watching.
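As an aside on how a scanner might “project” body composition from outside-only data, one plausible approach is to compare a person’s surface measurements against a reference database of people whose composition was measured directly, and average the closest matches. The sketch below is only a guess at that kind of method, with invented feature names and numbers; it is not Naked Labs’ actual algorithm:

```python
import numpy as np

# Hypothetical reference database: surface measurements plus weight for people
# whose body-fat percentage was measured directly with a clinical method.
# Columns: waist_cm, hip_cm, chest_cm, height_cm, weight_kg
reference_features = np.array([
    [85.0,  98.0, 100.0, 178.0, 80.0],
    [70.0,  90.0,  88.0, 165.0, 58.0],
    [95.0, 104.0, 108.0, 182.0, 95.0],
    [78.0,  95.0,  94.0, 170.0, 68.0],
])
reference_body_fat = np.array([18.0, 24.0, 28.0, 21.0])  # percent

def project_body_fat(features, k=3):
    """Estimate body fat by averaging the k most similar reference scans."""
    # Normalise each column so no single measurement dominates the distance.
    mean = reference_features.mean(axis=0)
    std = reference_features.std(axis=0)
    ref = (reference_features - mean) / std
    query = (np.asarray(features) - mean) / std
    dist = np.linalg.norm(ref - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return reference_body_fat[nearest].mean()

print(project_body_fat([82.0, 96.0, 98.0, 175.0, 74.0]))
```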