Google Translate App – use your camera to easily translate

Hi everyone, this is Brad, and we are taking a look at Google Translate and how it almost has an augmented-reality feel to it. You can see at the very top that it’s going to translate from English to Spanish. If you want to reverse that, select the arrows in the middle, and now it will go from Spanish to English. If for some reason you’d like to change the language, go ahead and tap it, and then you can type the language that you would like. Now we’re going to go back; we’re going to go from Spanish to English. You can see the options at the bottom, where there’s voice, conversation, handwriting, and a camera. We’re going to use the camera in just a little bit, but what we want to do first is download this offline translation file. It’s really helpful to have, and it doesn’t take too long to download. When you’re ready, select the camera; you’re going to need to allow access to the camera. Then point it at the text. It’s in Spanish, and it means “have a good day.” It’s written on my computer, and you can see right there it has now been translated. So there you go, that’s the Google Translate app. If you have any questions, let me know. Thanks for watching, take care, bye-bye.

New Feature in 1.22 – Camera Shift

Update 1.22 introduces a new feature called Camera Shift. This allows you to offset your camera in order to give you greater control of what you see on the battlefield. To do this, swipe on the minimap in the direction you want the camera to move. This allows you to look further ahead while still giving you full control of your movement and abilities. When you want to return the camera to the default position, just tap on the minimap and it will center back onto your hero. This is extremely useful for abilities with a long range, such as Baron’s Jump Jets, which can reach beyond the edge of the screen. Normally, in order to use Jump Jets at its maximum distance, you would have to tap on the ability, hold on the minimap, and then tap again to use the skill. Now, however, you can shift your camera in the direction you want to jump, making maneuvers like leaping over the Kraken pit much easier to execute.

Let’s look at a couple of examples of Camera Shift in a real game. In this clip, Baron is fighting against an enemy Ringo. After landing one of his Porcupine Mortars, he shifts his camera forward in order to get a better view of the lane. This allows him to more accurately aim his Ion Cannon at where he suspects Ringo will be. Shortly afterwards the ultimate connects and immediately deletes the enemy hero from the Halcyon Fold. In the next example, the enemy team is attempting to capture the Kraken. Skye uses her Forward Barrage to force them off the objective and manages to get the Reim low, who starts to back away from the fight. Skye then shifts her camera towards the locked-on Reim in order to keep him in view. Thanks to the Camera Shift, she is able to land a long-range Suri Strike to cut off his escape. Afterwards, another Forward Barrage allows Skye to pick up the kill.

Now that you know how the new Camera Shift feature works, make sure to give it a try in the next update. It might take a bit of getting used to, but I think you’ll find there are a lot of situations where it comes in handy. Good luck out there on the Halcyon Fold.

InDesign How-To: Put One Image in Multiple Frames (Video Tutorial)

Hi, I’m Erica Gamet with InDesign Secrets.
In this video, I’m going to show you how to place one image inside multiple
frames inside of InDesign. On this page, I have several different graphics frames
ready for images to be placed inside. And I can tell they’re graphics frames because
they have the blue X in the middle of them. But what I want to do is place one
image inside all of these, so I get sort of this windowpane effect. To do that, I
need to create what’s called a compound frame. I’m going to select all the frames
on my page, go up under the Object menu, down under Paths and choose Make
Compound Path. And when I do that, you can see that I now have one big X that
basically tells me that’s one big image frame. They’re still sort of separate, but
it’s going to act as one frame. Now I can go up under my File menu, under Place and choose an image I’d like to place. There’s a nice picture to be looking at
outside your window. And since I have Replace Selected Item already selected,
it will automatically put this image inside that frame we just created. I’m
gonna go ahead and click open…and it drops it inside. Now it’s not quite the
image that I was hoping to see. What happened was, it placed it at the size
of the image and it’s too big. So I need to head up to the Object menu, down to
the Fitting sub-menu and say Fill Frame Proportionally. Now it fits so that the
image fills the entire frame. Now remember, this compound frame acts just like any other regular image frame. So I can actually use my Content Grabber to
click on the image and manipulate it in any way. Like I might want to hold down my Shift key, maybe the Option or the Alt key and just drag, just to make it a
little bit bigger, and I can also move this around inside the frame. Let’s just
frame that up nicely so we can see everything. Beautiful. On my next page I
have a series of circles. In this case, I’m going to place an image and sort of
position it where I want it to be and then we’re going to paste it inside our
compound path. So let’s go up to the File menu and choose Place and grab our image and I’m not going to Replace Selected Item…just gonna say open. And I’m going to just click and drag, sort of on top of
these circles that are here. I kind of want to get an idea of where everything
needs to be sitting. I can also send that to the back…so that I can see my image
sort of back behind those circles. And I’m doing that because I want to make
sure that all the components of this compound path are in place before I make a compound path. So I’m going to choose the Selection tool and just sort of move these images around. Now I do have white fills on those circles. It makes them
easier to see but also harder to see the image back behind. When I’ve got it about
where I want it I’m gonna leave it and I’m going to Command- or Control-X to cut that image. Now I’m going to go ahead and select all of my circles, go up under the
Object menu, down under Paths, Make Compound Path. And now I’ve got my
compound path ready to drop my image inside. I’m gonna go up to the Edit menu
and choose Paste Into, and it pastes it into that path exactly where it was when
I copied it. Again, you’re not stuck with this. You can go ahead and use the
content grabber and move that image just like you would in any other frame. On a new page, I’m going to go ahead and place an image and then draw some frames on top of that to place that image into. I’m gonna go up to the File menu and choose Place and click Open…and I’m just going to go
ahead and drag this out. And I want to draw some frames on top of this image. Now I want to make sure they’re easy to see, so I’m gonna make sure that my
stroke color and my size are easy enough to see. Now I just want to draw out some
regions that I want this image to fill. So I’m going to go ahead
draw out some squares. As I’m drawing them, I can actually see how they’re
coming together. I can use my Smart Guides to guide me. And make sure the
family gets in its own little frame here. I’ll draw that out very roughly. Then I’m
going to use the Selection tool, select the image, Command- or Control-X to cut
that, select all those frames, go up under the Object menu, Paths, Make Compound
Path, and choose Edit, Paste Into. And there you go! There are a couple ways to
take one image and put it inside multiple frames inside your InDesign
document. Well, I certainly hope you found this tip helpful. Be sure to check out InDesignSecrets.com for thousands of InDesign articles and tutorials, and to
subscribe to our monthly publication, “InDesign Magazine.” Thanks for learning
with us!

Do you see what I see? Harnessing brain waves can help reconstruct mental images

Researchers have developed a new technique to reconstruct images based on what we perceive. Participants are hooked up to EEG equipment and shown images of faces. Their brain activity is recorded and the images are digitally recreated using machine learning algorithms. It’s the first time this has been done using EEG data. It may one day lead to a range of neurotechnology applications… like a way to communicate with those who can’t communicate verbally or use sign language… or forensic uses for law enforcement in gathering eyewitness accounts.

Citrix Automation, Image and Text Automation 4.1

Hello and welcome to UiPath Essential training
‐ Image-based automation. In a previous training you learned about recording
desktop applications, and how to manually add actions to the recorded workflow. You saw examples for 3 out of the 4 available
types of recording: Basic, Desktop, and Web. Today you’ll get to know the 4th one, Citrix. The tools and methods it uses are made for
automating virtual machines, but they are useful in other types of automation
too. VMs usually run on a server, and only the
image of the interface is streamed to the user. Therefore, UiPath cannot address the interface
through the operating system, as you learned, with Basic recording and selectors. We’ll need some special tools and techniques for that, and fortunately UiPath has methods
of dealing with these constraints. But there is one important limitation: you
cannot automatically record a set of actions, and have them played back. You have to hold the robot’s hand, and show
it what to do; at least for now. So, how do you do that? You try to teach it how a human does it: by
finding elements on the screen – like buttons or text fields – and doing something, in
relation to those elements: clicking buttons and controls, or filling in a text box. Let’s see a practical example. We have this Remote desktop connection, and
on the remote desktop we have a dummy banking app. Let’s say we want to click on some radio
buttons and fill in a few text fields. What we’d normally do is read the name
of an element, then click the corresponding control, and maybe type something. A robot does the same. Using ClickImage, you can click on almost anything: buttons, menus, text, or somewhere relative to such an element, as long as the image you choose is unique. Let’s select this area for the Deposit radio button. There is no other identical element on screen, so we should be good. We can choose where to perform the click, inside or outside the selected area. We’ll hit OK for a click in the center of the rectangle.
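UiPath handles all of this visually through its activities, but if it helps to see the idea in code, here is a minimal Python sketch of the same image-matching click using the pyautogui library. This is only an analogy, not UiPath itself; the screenshot file name, the pixel offset, and the typed value are invented for the example.

```python
# ClickImage-style illustration in Python (not UiPath): search the screen for a
# saved screenshot of a unique element and click the center of the match.
import pyautogui

# "deposit_radio.png" is a hypothetical screenshot of the Deposit radio button.
# confidence < 1 tolerates small rendering differences (needs opencv-python).
match = pyautogui.locateCenterOnScreen("deposit_radio.png", confidence=0.8)
pyautogui.click(match.x, match.y)        # click in the center of the match

# To click relative to the match instead (e.g. a text field to its right),
# offset the coordinates and then type into the newly focused field.
pyautogui.click(match.x + 150, match.y)  # 150 px is an arbitrary example offset
pyautogui.write("200", interval=0.05)    # type a value, like a Type action
```

The offset variant mirrors what happens next with the Cash-in field: anchor on a unique label, then click a fixed distance away from it.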
Right; next, let’s fill out these fields using the same method. Since we cannot select something as generic as a text box, we’ll choose something unique that’s close to it and doesn’t move in relation to our text box; its name is usually a good choice. So, select the Cash-in area, and this time choose to indicate where, relative to the yellow rectangle, to actually make the click. We’ll pick the text field, and type something to help us see the focus. And we’ll do the same for the next one: a ClickImage action, click and drag to select, indicate the position… in the text field next to it, and type.
You could use this method for the 3rd editable field too, but let’s see a complementary action, ClickText. The difference from its sister method is that ClickText uses OCR to scan the screen of the virtual machine for text. Hence, it works even if things like the Windows theme or text size are different. ClickImage, on the other hand, is faster and very reliable, but more sensitive to such graphical variations, and can fail if colors or background details change; so it’s more prone to errors in fluctuating environments. So let’s try ClickText on the last input field. You’ll recognise this window from the ScreenScraping training, but this one has a few more options at the bottom. The Text-to-be-found field is the text that you want to click on, or click relative to. We want to click on the 3rd text field, so we’ll just enter its name. By default UiPath will click on that text; since we want to click next to it instead, we’ll click Set Mouse Position and, just like before, indicate where to click; in this case, on the editable field next to it. After we insert one more Type action, we’ll do another ClickText to press the Accept button. That’s it; exit and run the generated workflow. Great. It correctly identified the radio button and the text fields, clicked in the designated spots, and typed in the text fields.
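As an aside, the "find text with OCR, then click near it" idea behind ClickText can be sketched outside UiPath with pytesseract and pyautogui. The field label "Amount", the offsets, and the typed value below are made-up examples, and on high-DPI screens the screenshot coordinates may need scaling.

```python
# ClickText-style illustration (not UiPath): OCR the whole screen, find a word,
# and click a short distance to its right where the editable field should be.
import pyautogui
import pytesseract

screenshot = pyautogui.screenshot()  # grab the current screen as a PIL image
data = pytesseract.image_to_data(screenshot, output_type=pytesseract.Output.DICT)

target = "Amount"  # hypothetical label of the third text field
for i, word in enumerate(data["text"]):
    if word.strip() == target:
        # Center of the recognised word, plus an arbitrary offset to the right.
        x = data["left"][i] + data["width"][i] // 2 + 120
        y = data["top"][i] + data["height"][i] // 2
        pyautogui.click(x, y)
        pyautogui.write("500", interval=0.05)
        break
else:
    raise RuntimeError(f"Text '{target}' was not found by OCR")
```

Notice that a single misread character makes the lookup fail, which is exactly the OCR fragility discussed below.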
Inside the generated workflow, there are a few important parameters you should know about. For the ClickText and ClickImage actions, options for single or double click and for left or right mouse buttons can be changed here, and modifier keys can be added here. ClickImage also has an Accuracy parameter you can play around with; at its maximum value, 1, the images must match 100% to register as found. 0.8 is a good balance between accuracy and reliability, but feel free to try other values. You should know that ClickImage and ClickText
have some drawbacks which can make them, in some situations, not 100% accurate. On virtual machines, ClickText relies on OCR to find the desired text, so even one misidentified letter can throw it off track. And ClickImage is very reliable, unless the theme or other graphical elements have changed. So here’s an easy alternative that can be faster and safer, and works in desktop and web apps too. We’ll go into further detail about it in a future training, but it’s pretty straightforward, so here’s how it works. The trick is simply to avoid mouse interactions and replace them with keyboard actions, such as navigating with the Tab key and using keyboard shortcuts to activate different functions of the application. Basically, just add Tab and other navigation keys using the Type tool. For example, let’s type in this field, and after the value we’ll add a Tab key to move to the next field, and so on for the rest. Or, you can combine multiple keyboard actions into a single one: type the first value, then add a Tab, type the value for the next field, then add a Tab, and so on. After the last field, the focus moves to the Accept button, so we’ll add a Space to press it, and we’re done. That’s all there is to it. As you can see, it’s a very fast and powerful method to interact with applications, virtual or not.
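The keyboard-only trick is easy to picture in code, too. Here is a short sketch of the same pattern with pyautogui; the values and the number of fields are invented for the example.

```python
# Keyboard-only navigation sketch (illustrative, not UiPath): fill several
# fields by typing a value and pressing Tab to move focus, then press Space.
import pyautogui

values = ["150", "75", "25"]  # made-up values for three consecutive fields

for value in values:
    pyautogui.write(value, interval=0.05)  # type into the currently focused field
    pyautogui.press("tab")                 # Tab moves focus to the next field

pyautogui.press("space")  # focus has reached the Accept button; Space presses it
```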
So now you know how to navigate a virtual machine interface and activate various menus, buttons, and text boxes with 3 actions: ClickText, ClickImage, and TypeInto. To get information out of virtual machines, there are again 2 methods: Select & Copy, and ScrapeRelative. Select & Copy is the easiest, but works only for selectable text, like these text boxes here. Remember that all commands are sent to the whole virtual machine window, so the action is performed on the active text field. That’s all. The two resulting actions are, you guessed it, Select and Copy. The copied text is available as the output variable of the second action, Copy Selected Text. Of course, you should combine it with any of the 3 input actions to bring the focus to the intended text field. For example, we could simply drag in a Type action and simulate pressing the Tab key to move to the next input. And drag in a WriteLine for testing. Let’s run it. That’s the easiest method of getting text out of virtual machines.
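For comparison only, the select-and-copy idea looks like this in a small Python sketch: select the contents of the focused field, copy, and read the clipboard. pyperclip simply stands in for the output variable of the Copy Selected Text action; none of this is UiPath's own API.

```python
# Select & Copy sketch (illustrative): assumes focus is already on the text
# field of interest, for example after a few Tab presses as shown earlier.
import time

import pyautogui
import pyperclip

pyautogui.hotkey("ctrl", "a")    # select everything in the focused text field
pyautogui.hotkey("ctrl", "c")    # copy the selection to the clipboard
time.sleep(0.2)                  # give the remote session a moment to catch up

copied_text = pyperclip.paste()  # read the clipboard, like the output variable
print(copied_text)               # quick check, like the WriteLine in the demo
```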
But what do you do when some text that you want is not in a selectable field? Easy. Remember screen scraping, and how that gave you the contents of the whole window or container? You apply the same output technique, but relative to a fixed element. The action is called Scrape Relative, and it’s here under Screen Scraping. It allows you to scrape just a portion of an image, relative to an anchor. Let’s get the result of this deposit and also the transaction ID. We first select the anchor, Total Deposit, and then indicate the area to scrape. In the scraping wizard, if we play around with the settings we can see we get a better result with Google OCR and a scale of 3. Then continue, and repeat the process to get the transaction ID: Screen Scraping > Scrape Relative > select the anchor > indicate the area to scrape. Let’s see what we got. You learned about most of these actions in the ScreenScraping training, but here’s the newcomer. The ScrapeRelative action is actually a series of three actions: first it finds the anchor image; when the image is found, a region identical to the anchor selection you made is the actual result. Then UiPath calculates the region you indicated, based on the anchor image, and gets the content of that region using GetOCRText. There’s also a fourth action that resets the clipping region; because it’s a shared resource, this avoids interference with other operations. It’s not mandatory to understand what each of these actions does, but you should know that the ScrapeRelative action generates these four activities.
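To make those steps concrete, here is a rough Python equivalent of the idea: locate the anchor image, derive a region at a fixed offset from it, and OCR only that crop. The anchor file name, the offsets, and the region size are all illustrative assumptions, not anything UiPath generates.

```python
# ScrapeRelative-style sketch (illustrative, not UiPath): find an anchor image,
# compute a region relative to it, and OCR just that region.
import pyautogui
import pytesseract

# "total_deposit_label.png" is a hypothetical screenshot of the anchor text.
anchor = pyautogui.locateOnScreen("total_deposit_label.png", confidence=0.8)

# Region to scrape, expressed relative to the anchor (offsets are made up):
# a 200 px wide box starting just to the right of the anchor, same height.
region = (anchor.left + anchor.width + 10, anchor.top, 200, anchor.height)

crop = pyautogui.screenshot(region=region)  # capture only that rectangle
value = pytesseract.image_to_string(crop)   # the GetOCRText step
print(value.strip())
```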
Now all we have to do is add in a couple of WriteLines and we’re done. The output of the ScrapeRelative action is in the GetOCR actions. Looks like our results are pretty good, except for a couple of spaces we can clean up later. That’s all you need to know to start automating virtual machines. Let’s do a quick recap to see what we learned. Today we focused on image automation, and
its special limitations. The first tool you learned about was ClickImage,
and you saw how to use it to correctly identify and click on certain interface elements, like
radio buttons and text fields. You also got a chance to try out the ClickText
tool, which scans the whole virtual machine screen for text, and lets you click on
any identified text, or relative to it. There was also a very useful technique of
using the keyboard exclusively, to move around an app’s interface using special keys like
Tab, arrows, and other shortcuts. In order to get information OUT of apps, the
first and simplest method was Select and Copy. As its name implies, it simply selects editable
text, and copies it to the UiPath environment, through the clipboard. For more difficult, non-selectable text, you
learned about the Scrape Relative action. It locates fixed elements on the screen, and
then, using OCR, extracts your relevant information. That’s all for image automation, but if
you’d like to know more, have a look at the advanced training. We’ll be covering a few more techniques
related to virtual environments. Goodbye!

Primary Year A Quarter 1 Episode 2: “In The Image of God”

Hi everyone. It’s Aunt Frenita. Today’s story is called “In the Image of God.” The memory verse is from Genesis, Chapter 1, Verse 27. It says, “God created mankind in His own image. In the image of God He created him; male and female He created them.” Today’s message is: God surrounds me with His gifts of love.

God created a beautiful and perfect world in six days. Then He created two very special people to enjoy it. Let’s read about it.

God had created the sun and the moon. He had created the plants, fish, birds and animals. He looked at everything, and saw that it was good. But His creation was incomplete. It was time to create people. Making people would be different from making the creatures and the plants. People were to be in God’s own image, and they were to rule over the animals. So God formed a man out of the dust from the ground. He carefully made the man’s fingers and toes, his eyes, ears and mouth. When God finished modeling the man, He blew into the man’s nose. The man began to breathe! He opened his eyes and looked around. God smiled at the first man, and called him Adam.

There was a lot to do that day. God told Adam that he was in charge of all the animals. His first job was to name them. Adam probably laughed when he saw the monkeys hanging from the trees, chattering to each other. He may have grinned when he saw the elephants with their long trunks and big flapping ears. He probably stopped to pet the shy deer, and play with a black bear.

God said, “It is not good for Adam to be by himself. I will make a mate for him.” Adam had probably noticed that he was by himself. He may have noticed that all the animals had a partner. They had creatures like themselves to keep them company. They could communicate and share things. But there had been no one for him. So God made Adam fall into a deep sleep. Then He took one of Adam’s ribs and created a woman. When Adam woke up, God brought the woman to him. Adam was pleased. He said, “She was made from my bones and my flesh. She will be called woman, because she was taken out of man.” And Adam called the first woman Eve.

At the end of the day, God looked at everything He had created. He saw the plants, the trees, the fish, the animals, and Adam and Eve. It had been a good day. And God said, “This is very good.”

This podcast is read by Frenita Buddy-Fullwood for gracelink.net. Created and produced by Falvo Fowler. Post produced by Faith Toh at Studio Elpizo in Singapore. The theme music is by Clayton Kinney. Animation and artwork by Diogo Godoy. gracelink.net

Best Overhead Camera Setup – Glide Gear OH100 – Super Easy Home Setup

Hey packrats! I am so excited to show you guys the ultimate overhead camera rig. This is called the Glide Gear OH100
adjustable overhead camera platform and trust me it is awesome. The Glide Gear
overhead camera platform is really simple: it’s just two legs with a bar
across the middle which you can adjust in height by unscrewing the knobs on the
sides. I have mine at the tallest setting. In the middle there is a center piece
which has a quarter inch screw so you can screw your camera on anywhere along
the center on either side. I have attached a quick release clamp but keep
in mind that the one I use is discontinued. Anyways just look at how
easy it is to set up and have your camera pointing perfectly downwards
centered on your table. There are also 10 extra quarter inch holes along the top
and bottom of the top bar so you can mount whatever you want on there. As well
since the center piece allows you to mount something to either or both sides
I added a cold shoe mount to mine so I can put something like a monitor here. So
here’s an example of how you might set up this overhead rig but personally I
usually just mount a camera and a monitor. I found that my Sony a6600 with the Tamron 17-28mm lens works perfectly on this
setup. Using the setup makes it really easy to get perfect framing because the
legs act as a guide. Here’s a sample clip I filmed with this setup So this product is an absolute joy to
use and when you’re not using it you can break it down into the three pieces and
pack it away really easily. I’ll put a link to the product in the description
and while you’re down there please consider hitting the like button and the
subscribe button. Alright, see ya!

Making The Printer Draw Like a Human | Drawing The Rock

In my previous video I drew like a printer, but I thought to myself, why not the other way around? So today I’m making the printer draw like a human. This is not a challenge for the printer, but it is for me. If you would like to know what is going to happen, please continue watching. Thank you.
I not only draw on one piece of paper, but also on multiple pieces. On each piece I draw a few small details, and the printer will take care of the rest. The printer will then redraw what I drew onto a single piece of paper, in order to have a complete picture.

SCARY TEACHER 3D IN REAL LIFE! Scary Teacher 3D PRANKS ON THE TEACHER!


Scary Teacher 3D in real life! *rents a house for the weekend* I’m a genius! “I won’t buy you an iPad tablet while the teachers are complaining about you.” So I’ll rent out my house while my parents are away for the weekend and buy one myself… Hello! Yes, I do. Yes, of course, come over. Wow, so fast! Hurry home! So fast! Wow, she reminds me of someone. This is the Scary Teacher 3D! What to do, what to do? Exactly, I won’t open the door; she’ll think that no one is there and leave. Where’s my room? It can’t be… Hint: trap the scary teacher, of course! How could I not have guessed at once! I need to get rid of her! Where can I get a mousetrap? Hello, mousetrap delivery? I look like a queen in this dress! Well, finally! Great! Now she must be lured out! Hi, I’m Professor Baldy! Does the teacher with a mop for hair live here?! Who said that? I must have imagined it. My favorite dress… Did you do that??? He’s got a cheat sheet! Where?! Makar? But what are you doing here? There’s no cheat sheet! Makar, bite!! In this video, no cat was injured! Give it a like for the cat, he tried :) Well done, Makar! Hint: scissors and the teacher’s dress! Oh yes! A little shortening! Add ventilation! Done! Hey, do you mind if I borrow your dress for a rock concert?? My dress! Don’t touch it!!! Hint: food + salt + hot sauce to add… I’ll add everything! Well, hunger is no joke!! My hair is ruined! *hairspray* *glue* What a strange hairspray, probably expired… need more! There! Hint: bees and the teacher. The house is cursed. That darn girl!!! If I find her, she’ll be sorry!!!! *sound of bees* Delivery! What is it? This is a gift from a secret admirer! Who’s that? This is… our new neighbor. She’s strange, I noticed. A beekeeper, apparently. I need something to drink for my allergies… smuggled… Wow, macaroni! No! Dad! No! Ksenia!!! Oh!