How to add a picture as PowerPoint Slide Background

Hi, I’m Ramgopal from Presentation-process.com. In this video, you will learn how to add a picture as a PowerPoint slide background. This technique comes in quite handy for creating very interesting slide backgrounds for your presentations, and let me show you how to do it in a step-by-step way.

Here I have a new presentation. The first step in inserting a picture as the slide background is to right-click on the slide and go to the ‘Format Background’ option. Now you will see this pane on the right-hand side, and the option we want to choose is the one called ‘Picture or texture fill’. As soon as you click on that, you will have a default texture inserted by PowerPoint. Don’t bother too much about this one, because we’re going to insert a picture from our file. So I’m going to click on the option called ‘Insert picture from file’, and I’m going to choose one of the images that will serve as the slide background. This is a picture that I have already saved on my computer, and I’m going to say ‘Insert’. As soon as I did that, you can see that the picture is now added as the slide background.

Let’s say you want the same slide background for every slide you insert in the presentation. All you need to do is choose the option called ‘Apply to all’. As soon as you do that, you’ll see that any new slide you insert, like by going here, will have the same slide background as well.

Another thing I want you to note is that once you insert a picture as the slide background, you also have the option to change the picture properties. All you need to do is go to the option called ‘Picture’, and you can change the picture colour or do some picture corrections. For example, if you want to recolour this picture in some other tint, all you need to do is go to the ‘Recolor’ option, and you can apply a green accent or an orange accent, etc. So all those options are available to you once you insert a picture as the slide background.

Hope you got some useful information from this video. As a thank-you for watching this far, I’m happy to present to you a wonderful mini training called ‘Five things you can do under five minutes to make your slides look more professional’. It is a useful mini training for every business presenter. Whether you are a business owner, a business executive, a trainer or a consultant, you will find this mini training very, very useful. You can sign up for the mini training by clicking on the link here, or by clicking on the link in the description area right below this video. Thanks a lot for watching the video, and I will see you in the mini training.
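If you ever need to script the same effect instead of clicking through the interface, here is a minimal sketch using the python-pptx library (my own addition; the video itself only covers the PowerPoint UI). python-pptx does not expose the ‘Picture or texture fill’ background option directly, so a common workaround is to stretch a picture across the whole slide and push it behind the other shapes; "background.jpg" is a hypothetical file path.

from pptx import Presentation

prs = Presentation()                                    # new, blank presentation
slide = prs.slides.add_slide(prs.slide_layouts[6])      # layout 6 is usually "Blank"

# Stretch the picture across the entire slide.
pic = slide.shapes.add_picture(
    "background.jpg",                                   # hypothetical image file
    left=0, top=0,
    width=prs.slide_width, height=prs.slide_height,
)

# Push the picture behind every other shape so titles and text stay visible.
sp_tree = slide.shapes._spTree                          # underlying XML (private API)
sp_tree.remove(pic._element)
sp_tree.insert(2, pic._element)

prs.save("picture-background.pptx")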

Good Morning My Love | Morning Quotes With Pictures of Flowers for Husband or Wife

Welcome to Greetings by Maria! Morning is the sweetest time of the day. It’s the best time to wish a very happy day to your loved ones through love quotes, beautiful flowers and a romantic video. This Good Morning Love video has all the romantic effects that your wife or husband is going to love a lot. It has the best love quotes and images of flowers. Wish this Good Morning to your love in a unique way by sharing this video with them. Have a lovely day! With Love, Greetings by Maria. Give this video a thumbs up, share and subscribe to my channel.

How to insert picture in ms word 2016 | Picture in MS Word

This video contains “How to Insert Picture in MS Word”. It is a video tutorial on the process of inserting or adding a picture in the MS Word 2016 version. To learn this process, watch this video tutorial. Watch this video, like, share, comment, enjoy and subscribe now.
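A minimal sketch, assuming you would rather do the same thing from code than from the Word 2016 ribbon shown in the video: the python-docx library can insert a picture into a document in a couple of lines ("photo.png" is a hypothetical file).

from docx import Document
from docx.shared import Inches

doc = Document()                                   # new, blank .docx
doc.add_paragraph("A picture inserted from Python:")
doc.add_picture("photo.png", width=Inches(4))      # hypothetical file; height scales proportionally
doc.save("picture-demo.docx")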

How to take a picture of a black hole | Katie Bouman

In the movie “Interstellar,” we get an up-close look
at a supermassive black hole. Set against a backdrop of bright gas, the black hole’s massive
gravitational pull bends light into a ring. However, this isn’t a real photograph, but a computer graphic rendering — an artistic interpretation
of what a black hole might look like. A hundred years ago, Albert Einstein first published
his theory of general relativity. In the years since then, scientists have provided
a lot of evidence in support of it. But one thing predicted
from this theory, black holes, still have not been directly observed. Although we have some idea
as to what a black hole might look like, we’ve never actually taken
a picture of one before. However, you might be surprised to know
that that may soon change. We may be seeing our first picture
of a black hole in the next couple years. Getting this first picture will come down
to an international team of scientists, an Earth-sized telescope and an algorithm that puts together
the final picture. Although I won’t be able to show you
a real picture of a black hole today, I’d like to give you a brief glimpse
into the effort involved in getting that first picture. My name is Katie Bouman, and I’m a PhD student at MIT. I do research in a computer science lab that works on making computers
see through images and video. But although I’m not an astronomer, today I’d like to show you how I’ve been able to contribute
to this exciting project. If you go out past
the bright city lights tonight, you may just be lucky enough
to see a stunning view of the Milky Way Galaxy. And if you could zoom past
millions of stars, 26,000 light-years toward the heart
of the spiraling Milky Way, we’d eventually reach
a cluster of stars right at the center. Peering past all the galactic dust
with infrared telescopes, astronomers have watched these stars
for over 16 years. But it’s what they don’t see
that is the most spectacular. These stars seem to orbit
an invisible object. By tracking the paths of these stars, astronomers have concluded that the only thing small and heavy
enough to cause this motion is a supermassive black hole — an object so dense that it sucks up
anything that ventures too close — even light. But what happens if we were
to zoom in even further? Is it possible to see something
that, by definition, is impossible to see? Well, it turns out that if we were
to zoom in at radio wavelengths, we’d expect to see a ring of light caused by the gravitational
lensing of hot plasma zipping around the black hole. In other words, the black hole casts a shadow
on this backdrop of bright material, carving out a sphere of darkness. This bright ring reveals
the black hole’s event horizon, where the gravitational pull
becomes so great that not even light can escape. Einstein’s equations predict
the size and shape of this ring, so taking a picture of it
wouldn’t only be really cool, it would also help to verify
that these equations hold in the extreme conditions
around the black hole. However, this black hole
is so far away from us, that from Earth, this ring appears
incredibly small — the same size to us as an orange
on the surface of the moon. That makes taking a picture of it
extremely difficult. Why is that? Well, it all comes down
to a simple equation. Due to a phenomenon called diffraction, there are fundamental limits to the smallest objects
that we can possibly see. This governing equation says
that in order to see smaller and smaller, we need to make our telescope
bigger and bigger. But even with the most powerful
optical telescopes here on Earth, we can’t even get close
to the resolution necessary to image on the surface of the moon. In fact, here I show one of the highest
resolution images ever taken of the moon from Earth. It contains roughly 13,000 pixels, and yet each pixel would contain
over 1.5 million oranges. So how big of a telescope do we need in order to see an orange
on the surface of the moon and, by extension, our black hole? Well, it turns out
that by crunching the numbers, you can easily calculate
that we would need a telescope the size of the entire Earth. (Laughter)
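As a rough illustration of that calculation (not the speaker’s numbers; the wavelength and ring size below are assumed, order-of-magnitude values), the textbook diffraction-limit relation, angular resolution ≈ wavelength / dish size, works out like this:

import math

wavelength_m = 1.3e-3                              # EHT-style radio observations, ~1.3 mm
ring_rad = 50e-6 / 3600 * math.pi / 180            # ring of ~50 microarcseconds, in radians
earth_diameter_m = 1.2742e7

dish_needed_m = wavelength_m / ring_rad            # dish size needed to resolve the ring
earth_resolution_rad = wavelength_m / earth_diameter_m

print(f"dish needed to resolve the ring: ~{dish_needed_m / 1e3:,.0f} km")
print(f"Earth's diameter:                ~{earth_diameter_m / 1e3:,.0f} km")
print(f"an Earth-sized dish resolves:    "
      f"~{earth_resolution_rad * 180 / math.pi * 3600 * 1e6:.0f} microarcseconds")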
If we could build this Earth-sized telescope, we could just start to make out
that distinctive ring of light indicative of the black
hole’s event horizon. Although this picture wouldn’t contain
all the detail we see in computer graphic renderings, it would allow us to safely get
our first glimpse of the immediate environment
around a black hole. However, as you can imagine, building a single-dish telescope
the size of the Earth is impossible. But in the famous words of Mick Jagger, “You can’t always get what you want, but if you try sometimes,
you just might find you get what you need.” And by connecting telescopes
from around the world, an international collaboration
called the Event Horizon Telescope is creating a computational telescope
the size of the Earth, capable of resolving structure on the scale of a black
hole’s event horizon. This network of telescopes is scheduled
to take its very first picture of a black hole next year. Each telescope in the worldwide
network works together. Linked through the precise timing
of atomic clocks, teams of researchers at each
of the sites freeze light by collecting thousands
of terabytes of data. This data is then processed in a lab
right here in Massachusetts. So how does this even work? Remember if we want to see the black hole
in the center of our galaxy, we need to build this impossibly large
Earth-sized telescope? For just a second,
let’s pretend we could build a telescope the size of the Earth. This would be a little bit
like turning the Earth into a giant spinning disco ball. Each individual mirror would collect light that we could then combine
together to make a picture. However, now let’s say
we remove most of those mirrors so only a few remained. We could still try to combine
this information together, but now there are a lot of holes. These remaining mirrors represent
the locations where we have telescopes. This is an incredibly small number
of measurements to make a picture from. But although we only collect light
at a few telescope locations, as the Earth rotates, we get to see
other new measurements. In other words, as the disco ball spins,
those mirrors change locations and we get to observe
different parts of the image. The imaging algorithms we develop
fill in the missing gaps of the disco ball in order to reconstruct
the underlying black hole image. If we had telescopes located
everywhere on the globe — in other words, the entire disco ball — this would be trivial. However, we only see a few samples,
and for that reason, there are an infinite number
of possible images that are perfectly consistent
with our telescope measurements. However, not all images are created equal. Some of those images look more like
what we think of as images than others. And so, my role in helping to take
the first image of a black hole is to design algorithms that find
the most reasonable image that also fits the telescope measurements.
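To make that idea concrete, here is a toy sketch, not the Event Horizon Telescope’s actual code: it samples an image’s Fourier transform at a few random frequencies, the way an interferometer only measures a few spatial frequencies, and then alternates between agreeing with those measurements and applying a simple prior (non-negative brightness) to pick one “reasonable” image out of the infinitely many that fit the data.

import numpy as np

rng = np.random.default_rng(0)
n = 64

# "Truth" image: a bright ring, standing in for the photon ring around the shadow.
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
truth = np.exp(-((np.hypot(x, y) - 0.5) ** 2) / 0.01)

# Keep ~10% of the frequencies at random to mimic the sparse telescope coverage.
mask = rng.random((n, n)) < 0.10
data = np.fft.fft2(truth)[mask]

# Alternate between (1) forcing the measured frequencies to agree with the data
# and (2) applying the prior that sky brightness cannot be negative.
img = np.zeros((n, n))
for _ in range(200):
    spec = np.fft.fft2(img)
    spec[mask] = data                  # data consistency
    img = np.real(np.fft.ifft2(spec))
    img = np.clip(img, 0.0, None)      # prior: non-negative brightness

err = np.linalg.norm(img - truth) / np.linalg.norm(truth)
print(f"relative error of the toy reconstruction: {err:.2f}")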
Just as a forensic sketch artist uses limited descriptions to piece together a picture using
their knowledge of face structure, the imaging algorithms I develop
use our limited telescope data to guide us to a picture that also
looks like stuff in our universe. Using these algorithms,
we’re able to piece together pictures from this sparse, noisy data. So here I show a sample reconstruction
done using simulated data, when we pretend to point our telescopes to the black hole
in the center of our galaxy. Although this is just a simulation,
reconstructions such as this give us hope
the first image of a black hole and from it, determine
the size of its ring. Although I’d love to go on
about all the details of this algorithm, luckily for you, I don’t have the time. But I’d still like
to give you a brief idea of how we define
what our universe looks like, and how we use this to reconstruct
and verify our results. Since there are an infinite number
of possible images that perfectly explain
our telescope measurements, we have to choose
between them in some way. We do this by ranking the images based upon how likely they are
to be the black hole image, and then choosing the one
that’s most likely. So what do I mean by this exactly? Let’s say we were trying to make a model that told us how likely an image
were to appear on Facebook. We’d probably want the model to say it’s pretty unlikely that someone
would post this noise image on the left, and pretty likely that someone
would post a selfie like this one on the right. The image in the middle is blurry, so even though it’s more likely
we’d see it on Facebook compared to the noise image, it’s probably less likely we’d see it
compared to the selfie. But when it comes to images
from the black hole, we’re posed with a real conundrum:
we’ve never seen a black hole before. In that case, what is a likely
black hole image, and what should we assume
about the structure of black holes? We could try to use images
from simulations we’ve done, like the image of the black hole
from “Interstellar,” but if we did this,
it could cause some serious problems. What would happen
if Einstein’s theories didn’t hold? We’d still want to reconstruct
an accurate picture of what was going on. If we bake Einstein’s equations
too much into our algorithms, we’ll just end up seeing
what we expect to see. In other words,
we want to leave the option open for there being a giant elephant
at the center of our galaxy. (Laughter) Different types of images have
very distinct features. We can easily tell the difference
between black hole simulation images and images we take
every day here on Earth. We need a way to tell our algorithms
what images look like without imposing one type
of image’s features too much. One way we can try to get around this is by imposing the features
of different kinds of images and seeing how the type of image we assume
affects our reconstructions. If all images’ types produce
a very similar-looking image, then we can start to become more confident that the image assumptions we’re making
are not biasing this picture that much. This is a little bit like
giving the same description to three different sketch artists
from all around the world. If they all produce
a very similar-looking face, then we can start to become confident that they’re not imposing their own
cultural biases on the drawings. One way we can try to impose
different image features is by using pieces of existing images. So we take a large collection of images, and we break them down
into their little image patches. We then can treat each image patch
a little bit like pieces of a puzzle. And we use commonly seen puzzle pieces
to piece together an image that also fits our telescope measurements.
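Here is a toy version of just the “puzzle pieces” half of that idea (it leaves out the telescope data term, and the “everyday images” are stand-in random smooth images): break a pile of images into small patches, then rebuild a target image out of the most similar patches from that collection.

import numpy as np

rng = np.random.default_rng(1)
p = 8                                        # patch ("puzzle piece") size

def smooth_image(n=64):
    """Stand-in for an everyday photograph: random values, crudely blurred."""
    img = rng.random((n, n))
    for _ in range(10):
        img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0
    return img

def to_patches(img):
    n = img.shape[0]
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, n, p) for j in range(0, n, p)])

# Dictionary of "commonly seen puzzle pieces" taken from a pile of images.
dictionary = np.vstack([to_patches(smooth_image()) for _ in range(20)])

# Rebuild a target image out of its nearest dictionary pieces.
target = smooth_image()
rebuilt = np.zeros_like(target)
for i in range(0, target.shape[0], p):
    for j in range(0, target.shape[1], p):
        piece = target[i:i + p, j:j + p].ravel()
        best = dictionary[np.argmin(((dictionary - piece) ** 2).sum(axis=1))]
        rebuilt[i:i + p, j:j + p] = best.reshape(p, p)

err = np.linalg.norm(rebuilt - target) / np.linalg.norm(target)
print(f"patch-approximation error: {err:.2f}")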
Different types of images have very distinctive sets of puzzle pieces. So what happens when we take the same data but we use different sets of puzzle pieces
to reconstruct the image? Let’s first start with black hole
image simulation puzzle pieces. OK, this looks reasonable. This looks like what we expect
a black hole to look like. But did we just get it because we just fed it little pieces
of black hole simulation images? Let’s try another set of puzzle pieces from astronomical, non-black hole objects. OK, we get a similar-looking image. And then how about pieces
from everyday images, like the images you take
with your own personal camera? Great, we see the same image. When we get the same image
from all different sets of puzzle pieces, then we can start to become more confident that the image assumptions we’re making aren’t biasing the final
image we get too much. Another thing we can do is take
the same set of puzzle pieces, such as the ones derived
from everyday images, and use them to reconstruct
many different kinds of source images. So in our simulations, we pretend a black hole looks like
astronomical non-black hole objects, as well as everyday images like
the elephant in the center of our galaxy. When the results of our algorithms
on the bottom look very similar to the simulation’s truth image on top, then we can start to become
more confident in our algorithms. And I really want to emphasize here that all of these pictures were created by piecing together little pieces
of everyday photographs, like you’d take with your own
personal camera. So an image of a black hole
we’ve never seen before may eventually be created by piecing
together pictures we see all the time of people, buildings,
trees, cats and dogs. Imaging ideas like this
will make it possible for us to take our very first pictures
of a black hole, and hopefully, verify
those famous theories on which scientists rely on a daily basis. But of course, getting
imaging ideas like this working would never have been possible
without the amazing team of researchers that I have the privilege to work with. It still amazes me that although I began this project
with no background in astrophysics, what we have achieved
through this unique collaboration could result in the very first
images of a black hole. But big projects like
the Event Horizon Telescope are successful due to all
the interdisciplinary expertise different people bring to the table. We’re a melting pot of astronomers, physicists, mathematicians and engineers. This is what will make it soon possible to achieve something
once thought impossible. I’d like to encourage all of you to go out and help push the boundaries of science, even if it may at first seem
as mysterious to you as a black hole. Thank you. (Applause)

Researchers reconstruct house from old Pompeii using 3D-technology

Our research in Pompeii started in 2000. It is still going on, but we hope to have finished the documentation of this city block this year or next year. The use of this technology, when connected with more traditional approaches, can give us the possibility of interpreting the past in a much more accurate way, and obviously of understanding deeply how life in Pompeii was in the past. The fascinating thing is that you enter history, you enter a historical context. You enter a house with four- or five-metre-high walls. You can go on a journey back. There is also so much evidence to discuss how people lived there 2,000 years ago.

The Secret Technique for 3D Drawings! – How to Draw an Anamorphic Rose in 3D

Hello my friends and welcome
to another Tuesday of tutorial! I am Leonardo Pereznieto
and today I will show you the secret to doing a drawing in 3D: an anamorphic drawing. To achieve this, you need two templates: one with regular squares and one in perspective. To start, we sketch a drawing in the template that is in perspective. We do it normally, as if we were drawing on a blank piece of paper. Just be careful not to go outside the template. In this case I will do a rose on a vase. Next we copy the other template
the one that is straight on a piece of paper. You may copy it to the same size
or you may measure the squares and draw them at double, quadruple or ten times the size. For this example I did them
2.5 times bigger than the original. Now, aided by the letters
and numbers of each line, we copy our original sketch
to this template. But as the squares are different
you will notice that the top of your drawing is going to become bigger
and generally longer. This is pretty fun to do. If we look at it from above it will appear distorted, but if we view it from the correct viewpoint, it should look correct or even in 3D.
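The cell-by-cell copying between the two templates is, in effect, a perspective (projective) transform. Here is a small sketch of that mapping under assumed corner coordinates for the two grids; the traced points at the end are hypothetical, just to show how content near the top gets stretched when it is transferred to the flat grid.

import numpy as np

def homography(src, dst):
    """3x3 projective map sending the four src corners onto the four dst corners."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

# Assumed corner coordinates: a 100x100 regular-square template ...
square = [(0, 0), (100, 0), (100, 100), (0, 100)]
# ... and the perspective template, whose far (top) edge looks pinched.
persp = [(30, 0), (70, 0), (100, 100), (0, 100)]

H = homography(persp, square)        # perspective-grid coords -> flat-paper coords

def transfer(pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return round(x / w, 1), round(y / w, 1)

# Hypothetical points traced from the sketch on the perspective grid:
for pt in [(50, 10), (50, 50), (50, 90)]:
    print(pt, "->", transfer(pt))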
When we have the form of our drawing, we can erase all the lines and start coloring or shading. In this case, I am using watercolor pencils. The complete list of materials is in the information below the video. For now, I am using them dry. I will correct some leaves, as I didn’t like how they looked in the position they were in. That’s better. I use two tones of green to do the lights and shadows. For the petals I will use mainly two tones of red: one is practically an orange and the other one is a dark deep red. This time I will use them with water. When they are wet they are more intense and they mix better. By the way, you can download the templates for free on my blog at my website fineartebooks.com; the direct link is in the information below the video. For the darkest shadows you can use a purple. You may also mix with a wet brush. I will give it some lights with yellow. Let’s draw the shadow with a projected spot of light. It’s nearly ready. A trick to make it look even more 3D is to cut the border of the paper
making your drawing go over it. In this case I will also cut
the other sides just because my drawing is a little bit small
for this paper. And it is ready! By the way, to see the 3D effect,
you need to look through the camera or if you are not using a camera
then close one eye, and to make sure you get a good result,
keep the other open! [laughs]. I hope this was helpful. If you enjoyed it please
give it a LIKE and subscribe to this channel. You know where to follow me,
the links are below… and I’ll see you on Tuesday 🙂

What is inside a digital camera? (2 of 2) | Electrical engineering | Khan Academy

OK. So I took a part two of
the exact same camera. And I did it, to be
honest, because I took the first one apart to
identify the parts inside of it and had some issues
getting it back together because some of the
screws just were stripped, and it did not want
to go back together. And so I bought another
one and found out something kind of interesting. So this camera,
a lot of products go through different revisions
that don’t necessarily show up in model numbers. So that means that
the manufacturer may have found a better
way or a less expensive way to produce something. Or it could be that they
just found a cheaper way to produce
something and were able to improve their margins. And so with this camera
being so low cost, I’m sure the margins are
of great interest to the manufacturer. If you look, this one says
version 3.8 and this one says version 3.3. So this is an older version. This version has an epoxy potted
central processing unit or CPU. And this one has just a
plastic cover over it. So the plastic cover may not be
as robust as the epoxy potting. I’m not sure, but it doesn’t
look quite as stable. It would be harder to
mess up those wires. And when I pop
this off, the wires are clearly messed
up to the CPU there. And obviously, the boards
are totally different colors. I guess that lets
the manufacturer know that they are
different revisions. If you the flip the boards over,
you can take a look at the– let me zoom in just
a little bit on this. You can see the
charge-coupled device, or the light-capturing
device is very different. So this one looks
larger, more robust. This one looks a lot smaller. So it appears that
what they’ve done here is a cost reduction, which was probably the reason for the change. Also, the onboard memory is
probably a reason to change. Also, the onboard memory is
a different manufacturer. So it may be that this memory is
less expensive, the new memory. This board doesn’t
need this capacitor. And so perhaps this
electrolytic capacitor was a cheaper way of storing
a charge during the photo process. And if you look at
the lenses here, this one’s got a big
lens cover over it. And this one has no
lens cover over it. So that probably added cost. I don’t believe that’s
actually the lens. I think it’s still
that tiny little thing that you see inside
this one here. But it looks bigger with that. And they probably just got
rid of it to reduce cost. So anyway, that’s the difference
between one of the earlier iterations and one of
the later iterations. But as far as consumers go, you
wouldn’t know any difference at all when you bought the
two cameras because they’re virtually identical
on the outside.

Mitsubishi DJ-1000: World’s Smallest Digital Camera (in 1997!)

Greetings and welcome to an LGR camera thing! And this is the Mitsubishi DJ-1000 digital
still camera, costing $249 US dollars when it launched in the latter half of 1997. And yep, that is the same Mitsubishi that
you may know for their cars and trucks, although it’s not from the same division. Mitsubishi Electric was and is a massive company,
with dozens of branches, subdivisions, and business units. And of course one of those divisions made
digital cameras in the ‘90s, but it seems it was short-lived. The DJ-1000, or DJ-1 as it was sometimes called,
was Mitsubishi’s one and only consumer digital camera, one of the most unique of its kind
in 1997. It was by far the smallest and lightest-weight
digital camera in the world when it was announced at PC Expo ‘97 in New York, weighing in
at just 2.8 ounces or 80 grams. But it also didn’t receive widespread distribution,
initially sold exclusively through T-Zone stores in the US, of which there were only
two when the DJ-1000 hit the market. It also saw distribution in Mitsubishi’s
home country of Japan as you’d expect, and in Europe under the Umax brand where it was
known as the Umax PhotoRun. But yeah, these days you’d be hard-pressed
to find anyone that remembers the DJ-1000 at all, much less owned one, so I was more
than happy to find this one new, complete in box. Inside is a neatly-packed plastic bag full
of goodies, a cardboard tray with memory card stuff, and finally the camera encased in bubble
wrap. And man, I knew this thing was gonna be small,
but wow. It’s really small! It’s about the size of a deck of cards,
able to fit happily inside a shirt pocket. Compare this to the most popular digital camera
of 1997, the Sony FD Mavica, and the difference in size and weight is ridiculous. Granted the Mavica used 3.5” floppy disks,
so maybe comparing it to something like the Fuji DS-7 is more appropriate, but still. Even against that the DJ-1000 remains minuscule,
which is extra impressive considering the Fuji uses SmartMedia cards and the Mitsubishi
uses CompactFlash. Yeah that’s right, the thinnest camera on
the market used the thickest memory card format on the market, go figure. It came with this two megabyte memory card
in the box, easily the lowest capacity CF card I’ve ever seen. This version of the package also came with
this PCMCIA card adapter, ideal for laptop users, though from what I’ve read Mitsubishi
also offered a desktop package with another adapter. As for the bag of goodies you get the photo
retrieval software for both Windows 3.1 and 95 in English and Japanese, a very blue soft-cover
carrying case that holds that camera quite snugly, a wrist strap that attaches to the
right-hand side of the camera, and several bits of documentation in both English and
Japanese. I especially dig this instruction booklet,
with its automobile service manual aesthetic and a message saying that it is important
to you. Although of all the cameras I’ve covered,
this is the ultimate in terms of simplicity, so almost all of this information pertains
to using the DJ-1000 software for Windows. And well, looking at the camera you can see
why. There’s almost nothing going on here, you
just turn it on, point, and shoot. That’s it! No settings to set, no adjustments to adjust,
nothing but a power switch and a shutter button. There’s not even a flash on the front, only
a passthrough window for the viewfinder and its tiny camera lens, a 5.8mm fixed focus
lens with an aperture of 2.8 and an auto shutter speed ranging from 1/60 to 1/15,000 of a second. On top is the shutter button and the power
switch and along the bottom is where you insert the memory card. There is no tripod mount. And then there’s the back of the camera
which is covered in a surprising amount of text. Guess they didn’t have anywhere else to
put this stuff so why not, because there’s not much going on back here. Just the viewfinder, a spot to install two
triple-A batteries, and this pair of LEDs. Since there’s no LCD screen and no sound
from the shutter, these are your only indications that anything is going right or wrong with
the DJ-1000. The top red LED lets you know if there’s
card activity or the battery is low, and the bottom LED flashes green, red, or some combination
of the two to indicate memory card status. When you power it on the lights all light
up and then the bottom LED turns green if it’s ready to take a picture. Press the shutter and you’ll see the top
LED turn red. When you’re running low on memory the bottom
LED lights up green and red, then solid red for the final shot, and eventually it’ll
flash red when it’s full. The two megabyte card holds fifteen photos,
but it supports CompactFlash cards up to fifteen megabytes, which provides an image capacity
of 113. Interesting to note that one of these high-capacity
cards would’ve cost more than the camera itself back in ‘97, at around $260 apiece. And you really wanted a second card back then
because there is no way to delete photos from the camera, so it’s either swap cards or
transfer your images to a computer. Once you’ve taken some pictures it’s time
to develop them through a Windows PC. And yes I do mean develop, since this stores
images in a proprietary file format exclusive to this camera. So even though it uses a standard CF card
that’s readable on a modern PC, you still need the DJ-1000 Viewer software that it came
with. Otherwise all you’ll see is a folder with
a bunch of DAT files, so open up the Viewer application and run the Index command. It’ll then read the photos, generate thumbnails,
and from here you can convert them into standard bitmap images one by one. So let’s take a look at them! As usual with older cameras I enjoy taking
photos of things that would’ve been around when it was new, in this case the late 1990s. And yeah, for that purpose this camera fits
the bill wonderfully. There’s something about that early consumer
digital camera aesthetic that charms the pants off me no matter what. And the DJ-1000 in particular has a look to
it that made it really fun to play with over the past month or so. The image quality isn’t too bad, though
the saturation is always low and the color temperature skews to the cooler side. It also has this particular type of spotty
pixelation and dithering that becomes more apparent on vivid, solid colors, kinda looks
like an early FMV game. Take a look at this comparison to my phone’s
camera and you get an idea of how it’s affecting things. Makes it pretty exciting to take pictures
and get ‘em onto a PC so you can see what unpredictable weirdness you ended up with. Then there’s the way it handles specular
highlights and lighting of a certain range in brightness, check it out. You get these green streaks protruding downward
from anything bright enough, like reflecting sunlight and white or shiny surfaces. This alleyway shot in particular looks crazy,
it made it look like the building in the background was casting a shadow but it was actually just
freaking out at the bright blue sky up against the dark brick walls that turned purple. And this one is probably my favorite, it was
taken sideways and then rotated, and with the green trails from the reflecting light? It looks like this car was speeding by, even
though it was standing still at the time. I’ve seen similar things on other older
digital cameras without an infrared filter, but this particular style of strange on the
DJ-1000 is just fascinating to me. And yes I also tried it with a UV filter;
it made no difference! There are also an assortment of image adjustments
you can perform through the Viewer application, like color balance, contrast, and brightness. My favorite though is “sherpness.” Ermahgerd sherpness, it’s so sherp! Then there’s resolution, which is a distinctly
separate function from resizing. The DJ-1000 shoots using a 1/5-inch Sanyo
CCD that by default produces photos with a resolution of 320×240 pixels. But that’s just the “normal” resolution. If you choose “high” resolution from the
Viewer program, it’ll re-open the photo and output it at 504×378. That’s an increase of 57%! And yeah there’s a legit difference, it’s
not just upscaling the image. This is a picture at normal resolution, and
here’s the exact same picture reprocessed in high resolution. It’s still low-res by today’s standards,
but it’s notably cleaner and reveals more detail, and you even end up with an ever-so-slightly
higher field of view. There’s also a bit more of that green light
on the left-hand side, adding one more quirk to the unique visual quality of the DJ-1000. And finally, the last thing I want to mention
is the fact that deleting photos is a bit weird. Like, you’d think you’d be able to just
go into Windows Explorer and delete them that way, right? Nope! I learned this the hard way, but if you do
that then the camera will think the card is still full. Apparently this is due to some kind of conflict
with how Windows 9x and above handles deleted files and the indexing done through the camera
software. I thought I’d just be able to reformat the
card and it’d be fine but that didn’t work either, it just thought the card was
still full. I had to put the images back onto the card,
go into the camera software and delete them there, reindex the folder, and then it was
fine. What a pain. And that is the Mitsubishi DJ-1000 digital
camera from 1997. A somewhat annoying little thing but an absolutely
charming one nonetheless. This is one of those situations where I adore
a piece of retro tech so much precisely because it’s so confined in capabilities and finicky
in functionality. I really enjoy the weird, grainy, off-color
images it produces, and I absolutely love how it feels in the hands. Its thin, lightweight metal construction is
just a pleasure to hold, and the fact that it’s an obscure digital camera from 1997
makes it all the more fun. Shame that Mitsubishi never made a successor,
but oh well. At least we got the DJ-1000! And if you enjoyed this digicam retrospective
then might I recommend a couple more? You can also subscribe for more videos every
week here on LGR. And as always I thank you very much for watching!

Drawing pictures with music!

*Fast Music Playing* Just incredible If you don’t know who Aleksander Vinter is (Also known as Savant) You should be following him, he’s an amazing music producer He started posting these MIDI drawings that he does This is the most popular one I think so far, it’s gotten 3 Million views in about a week He’s done a monkey, (nope, that’s a rare Pepe, Andrew) A T-Rex A dragon that plays the Super Mario theme. I don’t know how he’s doing this but I’m gonna try and figure it out today. And obviously I’m gonna make a unicorn. I’m gonna make This Unicorn *Music Playing* *Alert Sound* *Music Playing* *Opens box, takes out paper, waggles paper, printer prints* So here’s my thought process. My transparency is lined up with this edge of the screen. So if anything happens to it, I can always put it back In exactly the same place. I made a MIDI clip and set a tempo And kind of test it out. How long it’s gonna take to play the whole unicorn And this MIDI clip is a fixed size So if I wanna go in and do some edits I can do it, But I can always zoom out And get to the same place To see the whole picture of the unicorn Without notes being in the wrong spots The first thing I’m gonna do is trace the unicorn as accurately as I can. *Music Playing* I have now traced it to the best of my ability It was a difficult balance to strike between getting the image to come across And also getting in enough detail that it felt like a good picture So this was just drawing it, not thinking about the music at all. Let’s see how amazing it sounds! *Jumbled up music playing* Nailed it! *exhale* Okay! And now we attempt to make this sound musical. And I can. It’s gonna sound better Okay. First thing let’s quantize All the notes to 30 secooooooonds And this just gets everything on beat. Still looks like a unicorn, Okay. I’mma move this whole thing up Because a lot of the notes were kinda low. I’m just gonna see how many of those low notes we can get rid of And have it still look like a Unicorrrrrrrrrrrrrrrrrn (Discordant notes in the background) Just kinda free hand remaking this mane. Come on mane! I’m gonna say that still looks pretty good for a unicorn mane But it isn’t so busy right off the bat Let’s give it another listen… *High Pitched And Low Pitched Unharmonistic Music Playing* Ooh… maybe I made this too tall! That’s a huge range of notes. It’s still really high and really low. Our lowest note is D So I’mma say we’re in the key of D Minor So if I was using a minor key. 
So now I’m gonna go in the first bar and delete every note Or move every note so that it fits in D Minor Or a melody that would go over top of D Minor *Note moving noises* *Kind of Jumbled Up Music Playing* What I’m learning Is that it’s pretty difficult To make it sound good Just using what I have drawn So I need to take the idea Of what I have drawn And recreate most of it With the music in mind I get a feel for the shape of the picture I zoom in, I make adjustments to the notes musically I zoom out to see if it still looks unicorn-y Also, look at how our picture changes when I zoom in SMOOSH DAT UNICORN And I got a first bar now that sounds pretty musical *Good Music Playing* Let’s keep going *Cool Harpsichord Sound Playing The Good Part* I’m starting to hear this was kind of a baroque piece So I have slowed down the tempo And changed the instrument to this nice harpsichord *Harpsichord Music Plays And Then Gets To The Bad High Pitched Part* Bad high pitch part appears Look at this awfulness Corrupt file I saved my MIDI Unicorn, no problems there I went to have lunch When I came back to open it up It was corrupt *Sad inhale and exhale* This is super annoying! So next video I will give you A glorious MIDI Unicorn And for today You can have this *Horrible Music Playing* And now I will thank you for watching And dejectedly walk away *Sad, Defeated Sounding Piano Music Playing*
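For anyone curious how the quantize-and-snap-to-D-minor steps could look in code, here is a small sketch using the mido library. This is not what Andrew actually used (he works in a DAW), and the traced note list below is hypothetical, standing in for points picked off the unicorn drawing.

import mido

D_MINOR = {2, 4, 5, 7, 9, 10, 0}         # D, E, F, G, A, Bb, C as pitch classes
TICKS_PER_BEAT = 480
GRID = TICKS_PER_BEAT // 4               # quantize starts to a sixteenth-note grid

def snap_pitch(note):
    """Move a MIDI note number to the nearest pitch in D minor."""
    for offset in range(7):
        for candidate in (note - offset, note + offset):
            if candidate % 12 in D_MINOR:
                return candidate
    return note

def quantize(tick):
    return round(tick / GRID) * GRID

# Hypothetical traced points: (start_tick, midi_note) pairs from the drawing.
traced = [(0, 61), (130, 66), (250, 70), (370, 74)]

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

now = 0
for start, note in sorted(traced):
    start, note = quantize(start), snap_pitch(note)
    track.append(mido.Message('note_on', note=note, velocity=80, time=start - now))
    track.append(mido.Message('note_off', note=note, velocity=0, time=GRID))
    now = start + GRID

mid.save('unicorn_sketch.mid')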

CES 2020: Nikon D780 | Crutchfield

Hey this is JR here at the Nikon booth
at CES 2020 in Las Vegas, holding in my hand the brand-new Nikon D780. They
just announced this here at CES this week, and we already are taking
pre-orders for it at Crutchfield so you might want to check it out. Here’s what’s
exciting about it. It’s got a twenty four point five megapixel backside
illuminated full-frame sensor. Gorgeous images on this. There’s a 51-point autofocus system that you can use in the viewfinder, and when you’re going
in live mode or movie mode, you can take advantage of the 273-point
hybrid focus, zooming out, zooming in. And shutter speed, let’s talk about that real
quick. All the way down to as quick as one eight thousandth of a second, or as
long as nine hundred seconds, up to about fifteen minutes for a super long
exposure shot: nighttime, stars, whatever it is you want to shoot that way. The D750
it replaces only went to one four thousandth of a second and only as long
as about thirty seconds, so big improvements here in the shutter speed.
The full-frame sensor images look fantastic. This is the new Nikon D780,
announced here at CES. Check it out at Crutchfield. If you have any questions on
cameras in general, please give us a call, chat with us online, send us an email, we
have advisors who have got experience with cameras ready to help you get the
right camera for you.