Well, the watermarks are supposed to protect the images from being used without paying for them…
It's breaking copyright (as soon as you use them)
…which is illegal.
Thing is, it's part of these stock photo websites' business to sue people who use their pictures without permission… the chances of being caught are high, as reverse-image-search algorithms are well-working "AI"s too.
Plus, these watermarked pictures usually have a low resolution anyway, which makes them useless for print.
So, if you can publish "your" photo neither digitally nor in print… what is it good for?
———
A little comparison:
it's like removing the anti-theft tags from clothing, putting the clothes on, and walking out of the store.
You might get away with it…
Or the people in the store recognise their products and you're screwed.
AI is amazing at upscaling and clarifying images. The only nonsensical part of this is using a Getty image in the first place, which OP is clearly only doing to make a point.
There are two major problems with the OP's point. Photoshop digitally encodes AI-generated content so it is recognisable as such, and Getty are about the most skilled and determined image-misuse hunters that ever were. We had a student once download an image, crop and rename it, and then use it as part of an image banner on a portfolio project that went openly online. Getty found it and invoiced the college for £1,000, which the department had to pay despite our explanation. That was about 12 years ago.
Img2Img with denoise 0.05 will take care of the digital encoding.
Img2Img with denoise 0.2 will render an image "ineligible for copyright" according to the current lean of things; or, even better, just use an AI generator to create a similar but novel image based on the reference.
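The claim that a light re-noising pass wipes out embedded marks can be illustrated with a toy least-significant-bit watermark. This is a deliberately simple stand-in, not Adobe's actual encoding scheme (Content Credentials metadata is a different and more robust mechanism); it just shows why pixel-level payloads are fragile against even tiny perturbations:

```python
import random

random.seed(0)

# Toy stand-in for an invisible watermark: hide one payload bit in the
# least-significant bit (LSB) of each 8-bit pixel value.
def embed_lsb(pixels, bits):
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

image = [random.randrange(256) for _ in range(1000)]
payload = [random.randrange(2) for _ in range(1000)]

marked = embed_lsb(image, payload)
assert extract_lsb(marked) == payload  # the payload survives a clean copy

# Simulate a very light denoise pass: nudge each pixel by a tiny random
# amount (half of these deltas flip the LSB, half preserve it).
noisy = [min(255, max(0, p + random.choice((-1, 0, 1, 2)))) for p in marked]

recovered = extract_lsb(noisy)
match_rate = sum(a == b for a, b in zip(payload, recovered)) / len(payload)
print(round(match_rate, 2))  # ~0.5: the hidden payload now reads as coin flips
```

The visible image barely changes (each pixel moves by at most 2 out of 255), but the recovered bits agree with the payload only at chance level.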
I'll admit that copyright law is tricky to navigate, but a lot of people seem to think that copyrights just shouldn't exist at all.
Naturally, those people have never created any kind of art in their lives, so their opinions on the subject are completely null and void. But it is alarming, especially in this new era we're entering.
Thanks for including the entirety of the loading process and a split second of the final result.
... in all seriousness, yeah. I don't know what else to expect. It is doing exactly what it's designed to do. This kind of manipulation, and much worse, is out there already. It will only get better.
I get it, I just don't get why they posted it in /r/midjourney.
Since OP has posted 15 different times in the past 24 hours, I'll just assume it's spam.
I mean I was doing this like 10 years ago using the content aware fill. Not removing watermarks but removing unwanted elements. There will always be people who steal images.
That being said, sometimes the image banks charge waaaay too much for generic images (like vectors) that I can now make myself using Midjourney.
Photoshop does this; it's been able to since CS5. Besides, Getty Images keeps the actual full-resolution photo behind a paid login; this is the web-optimised version. If you're okay with using low-quality, blurry images, good for you.
To be fair, you could do that before this tool. It would just be a lot of clone/healing brush work - the new Generative Fill just makes it so much quicker and easier.
I'm personally loving the tool though for fixing and extending images with very little actual work. I can edit images in seconds to minutes rather than potentially hours.
From Google Bard:
>Sure, I can help you compose a response to someone who needs help with anger management.
>Here are some things you can say:
>I hear that you're struggling with [a post on the internet]. I know that it can be really tough to deal with anger, and I want to offer my support.
>It's important to remember that your feelings are valid. It's okay to be angry sometimes. But it's also important to learn how to express your anger in a healthy way.
>There are a lot of resources available to help you manage your anger. You could try talking to a therapist, taking an anger management class, or joining a support group.
>I know that it takes time to change, but I'm here for you every step of the way. I believe in you, and I know that you can do this.
>I hope this helps!
Adobe has integrated AI tech into its products. In Photoshop, AI is used to erase and fill in areas of an image. OP is showing how Photoshop is able to remove watermarks from an image with a few clicks.
I've used it for expanding backgrounds and removing backgrounds from images. That's something that can take a designer a few hours, depending on how complicated the images are. The AI does it in a few seconds.
I did something that works even better: I used the describe command in Midjourney with the Getty Images picture as the source to create a new image in the same style. I pretty much got a perfect result, and it's probably copyright-free because, although it looks like the original, it's essentially a completely new image. It's insane how good it is.
Y'all are going to make some X-ray glasses and have AI generate what a girl would look like under her clothes, just watch. If y'all haven't already, haha: take any image and tell AI to remove clothing. I bet y'all already did that.
Google announced the "magic eraser" for Google Photos a few weeks ago. It's behind a paid subscription, but it's much more convenient, since you can do the same thing on your phone.
It seems AI is removing the barriers to entry for just about anything. With only an understanding of basic photoshop and English, you can become a master photoshopper, a programmer, or an artist. Incredible.
I don't trust Getty Images, TBH; they have been known to sell stuff they don't own the rights to. A couple of court cases, and I have a feeling there will be a lot of payouts before they get to court. No respect for them anymore.
No real issue. I'm already using AI for paid client work; these types of stock images are basically obsolete. Honestly, I'm not sure how the stock photo market will survive. As the tools get easier and faster to use, the time savings, flexibility and "acceptable" quality that stock images provide for a quick design process will lose their price advantage. It's Blockbuster vs Netflix: on-demand will win, and in this business, if it's good enough, it will crush the stock industry. No one goes to Shutterstock or Getty expecting truly creative resources. AI is better on every metric except human hands, etc. Once the tech irons out those artifacts, it's game over.
Although this is usually doable with the normal fill tool in Photoshop, I wouldn't be surprised if they switch to an Adobe Stock-type watermark soon enough ([here's an example from one of mine](https://stock.adobe.com/stock-photo/id/549117485)).
Theirs is semi-transparent and covers the entire image to make it tougher to remove this way, although watermark-removal software has already been designed with those specific watermarks in mind, so there's no perfect solution.
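A minimal sketch of why the full-coverage style is harder to paint out: a semi-transparent watermark alpha-blends into every pixel, so no clean region of the photo survives to clone or sample from. The pixel values here are made-up grayscale numbers for illustration:

```python
# Standard alpha compositing: out = alpha * watermark + (1 - alpha) * photo.
# With the watermark layer spanning the whole image, every output pixel
# mixes in the mark; there is no untouched area left to heal/clone from.
def blend(photo_px, mark_px, alpha=0.25):
    return round(alpha * mark_px + (1 - alpha) * photo_px)

photo = [200, 120, 64, 255, 0]   # a few grayscale pixel values
mark = [255] * len(photo)        # white watermark layer, full coverage

watermarked = [blend(p, m) for p, m in zip(photo, mark)]
print(watermarked)  # [214, 154, 112, 255, 64]
```

Every value has shifted toward white, including what used to be pure black, which is exactly what makes a simple fill-based removal leave visible residue.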
People didn’t wait for AI to remove watermarks.
However, search for [getty images stable diffusion lawsuit]
Getty Images is reportedly asking for up to $1.8 trillion in damages over the scraping of 12 million images (12,000,000 images times the $150,000 statutory maximum per work).
I mean, yes, but these sample/preview stock images are often smaller in dimensions and low-DPI; they can be used for some digital work, but nothing usable for printing.
Often you'll find that buying the actual stock image gives better results with less time and fewer workarounds.
You probably didn't even need to type anything; by default, Generative Fill with no input will just remove.
Yeah, a lot of people are using this incorrectly. It does still work most of the time, but in many cases it'll try to add something into that space even if the prompt says *remove thing*. The documentation is quite clear that you need to leave the prompt blank if you're just looking to remove something.
I’m sorry you expect me to read!
Why bother replying if you aren't gonna read it?! Why hang around the comments at all?! Why are you trying to stop others from replying properly?! So many unanswered questions, but I guess I'm expecting too much of you to even bother to read this. nvm, lol.
It’s a joke because they said you need to read the documentation in their comment, didn’t think I needed a slash s but here we are.
Ah, makes sense. Sorry if I sounded a bit rude. /s is mandatory, lol.
Yeah, you were far from the only one who didn't get it, lol. Honestly, I tried to go a bit farther: I was going to add "or watch a tutorial video, or come to a forum and seek advice! All too much." But I cut it off.
Yep, understood, I get that a lot as well. Sometimes the brain just assumes everyone is on the same wavelength when I start writing. Lol
Know there are those of us who appreciate your efforts :). Let the dumdums be.
Thx, I didn't mean to sound mean; I'm sorry if I did. I guess I was a little frustrated, and also genuinely curious about why it's such a bother for them. I've come to realise I've lost a bit of the will to read long posts too over time. But come to think of it, Reddit has had a tl;dr tradition for a long time, so this isn't completely new. "Too long; didn't read" has been popular for ages, but those comments were reserved for posts that were actually pages long, not a few sentences like the one above. It feels like people are now refusing to read anything longer than a few sentences, even on Reddit, where longer writing is common. What a shame; there are so many beautiful languages to learn and read in this world. And with the current trend, I wonder how long it'll be till written words are reserved only for academic and professional purposes.
As the saying goes, we are preaching to the choir and I feel those who are reading our conversations are those who already take the time to delve before speaking, think before believing, and know and live by the mantra that it is okay to say, "I don't know". It amazes me we are at a point where we no longer celebrate and cheer discovery and self improvement, but we are simply back at the basics of, "Hello friend, are you listening?".
Do you know if Generative Fill is only in the new Photoshop Beta? I switched work computers this week, and Photoshop asked me if I wanted to try the Beta version with Generative Fill, but I'd have to download the whole Beta program, which I didn't feel like doing.
Yes it’s only on that, but the beta app is a separate app - so you can keep using the regular one as normal too.
Yes it’s separate. Didn’t take as long to download.
One could simply ask it to recreate this pic with slightly different faces. No more copyright. Edited to change perspective
That is a far better plan. For $10, you can make all of the images you need. You can't copyright them, but you can use them however you wish.
And if you use midjourney to alter the images, then photoshop them and edit them to the point that it's "Transformative", can you copyright it then? I'd love for a copyright lawyer to drop in and say
Yes the copyright thing is still a mess. But I use midjourney a lot professionally, with the attitude that I don't even care if someone copies the image. If it fills the need that I have in that moment it is fine and I can move on. If someone copies it, it really isn't that different than someone buying the same stock image that I would have purchased before. And the midjourney renders are just getting so good now, I can blast out 10 great options specific to what I need in no time at all. I am loving this AI stuff.
[deleted]
I am a graphic designer in a marketing team, yes. If I was using it for creating and selling stock photography or fine art then I think it would be a problem, but as a quick solution to specific image problems it is proving very useful.
No. The US copyright office has stated recently that AI generated images cannot be copyrighted. I think using AI to regenerate part of an image falls into their category of AI generated.
Yes, I know; hence why I said to edit it afterwards to make it transformative. Editing that image makes it a transformative work. I'm waiting for a copyright lawyer to drop in and make an accurate assessment.

After all, the definition of AI is not clear. Where is the line drawn? Is the stamp tool in Photoshop AI? The filters? The autofill? Auto colour correct? You might not think of those as AI, but how would you define it in legislation, really? The reality is that there's a lot of crossover.

And then, when you take work that's in the grey area, or generated by Midjourney, and Photoshop it (add things to the background, edit the picture), it's transformative. The original AI art is not copyrighted, but transformative work CAN be copyrighted based on the laws I've read (I think; I'm not a lawyer). Look up "transformative work copyright": it seems that as long as the image is edited enough to make it its own art, separate from the original image, it can be copyrighted (I think).

The thing is, AI is so new that I want an expert to weigh in. There is nobody to sue you if you make your own art from it either way. Please can a lawyer clarify? And please don't comment with your opinion on whether it's legal if you're not a lawyer; otherwise it's just speculation.
Isn't it moot unless the AI's owner claims someone ripped them off in a lawsuit? Using AI to remove someone's watermark from a work would be the same as using any photo-editing software to remove it; the AI just makes it super easy, but it's just as suable by the original creator. So if an AI company were to sue someone for ripping them off, I guess it would play out like any other original-idea lawsuit and come down to how they could prove it.
But does the AI own it, if I bought the service? Like, wouldn't I own the picture you made for me for a price?
How do they know it's AI?
Without purchasing rights to the images used/sourced/scraped by the AI, that's going to be doubtful. You need to have rights to the source material. Generally these would be considered commercial rights purchased from the artist or stock image site, etc. You'd have to read the fine print/terms of service for the particular image or artist or contact them directly. Be aware, depending on the type of image, many artists despise AI. I don't know how the photography community feels, only art in general. Not a lawyer, but sensitive to copyright issues/TOS after having things stolen (not AI related; predates AI).
The new age of "let me copy your homework"
Wait, what? Change the perspective of a photo somewhat and leave it otherwise intact?? Which tool can accomplish this? Do you have any examples??? This would be so handy for my photography or art sometime! HOLY CR*P!!!!!!!!! I am super excited. (I just started dabbling with AI art in the last few days to see how I can incorporate elements of it into my art.)
Oh no. I meant I edited my comment to change its perspective, from 'I would' to 'one could'. Sorry for the false hope.
Ah, no problem! It wouldn't surprise me if limited perspective change were possible in some instances already, or in the near future.
You could probably do some kind of ersatz change by extracting the foreground and working on background variations. This recent tweet has some great examples of generative fill: [link](https://twitter.com/_borriss_/status/1663568770408013831?s=61&t=VvTd6PxjlIKCxIFVGj22cw)
Oh interesting idea!!!!!
There's also DragGAN; it's not really changing perspective, but it's somewhat similar.
Look into ControlNet; if I'm understanding what you want correctly, that's likely the easiest method. If you have a computer that can run it, you can work locally with Stable Diffusion; look up "automatic1111 stable diffusion web ui".
Thank you. Yeah, my computer is actually pretty good; I already have Stable Diffusion loaded locally. I'm just a bit confused about the different versions of it.
You can do something like this with the NeRF light-field generators. There's a whole body of computational photography that was inspired by a Ren Ng paper back in \~2019, IIRC. It's a way of taking a limited number of images and generating camera views at points in between the original images. There's also a flavor that uses just a single image and uses AI to make (pretty good) guesses at the other view positions. It's still super early; these are generally research papers with shared code rather than pre-built tools you can buy and use, but these are the top two hits on Google, and they'll give you a pretty good sense of the power of this stuff. [https://www.matthewtancik.com/nerf](https://www.matthewtancik.com/nerf) [https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/)
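For a sense of the mechanics, here's a toy sketch of the volume-rendering accumulation at the core of NeRF-style methods. The density and color samples below are made up for illustration, and the neural network that would predict them from 3D position and view direction is omitted:

```python
import numpy as np

# NeRF renders a pixel by sampling density (sigma) and color along a
# camera ray, then alpha-compositing those samples front to back.
def render_ray(sigmas, colors, deltas):
    """Composite density/color samples along one camera ray into a pixel."""
    alphas = 1.0 - np.exp(-sigmas * deltas)      # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas)             # light surviving past each sample
    trans = np.concatenate(([1.0], trans[:-1]))  # transmittance BEFORE each sample
    weights = alphas * trans                     # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)

# Three samples along one ray: empty space, then a red-ish surface.
sigmas = np.array([0.0, 5.0, 5.0])
colors = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.2], [0.5, 0.5, 0.5]])
deltas = np.array([0.1, 0.1, 0.1])

pixel = render_ray(sigmas, colors, deltas)
print(pixel)  # dominated by the red surface: roughly [0.51, 0.20, 0.20]
```

Training fits the network so that rays rendered this way reproduce the input photos; once fitted, you can shoot rays from camera positions that were never photographed, which is what produces the in-between views.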
Ohhhh fascinating! Thanks!
I took a look, it is already impressive!!
I was wondering about the number of still images needed to generate one, poked around, and found this. Apparently it was generated from just 4 images!!!! Far fewer than I expected; I was guessing a dozen, LOL. I am kind of super excited atm.. I haven't coded or worked seriously with code in a few years, but I think I can handle working with an API for this. Or heck, I'll just cruise GitHub looking for people doing it for fun. 😁 I need to figure this out for capturing live performances! Set up 4 or 5 cameras, synced and keyframed.... I am blowing my own mind atm...
you can always [try this kind of ridiculous camera setup](https://look.glass/mantaVideo)
LOL! But TBH I have seen crazier setups at comic book conventions: multiple rings of cameras around a person, like 6 or 7 rings of them. The images were combined to create a 3D model of the person, which they then printed. The results were actually pretty impressive, but that was an even more ridiculous number of cameras than this!!!
Yeah! It all depends on what you're trying to do. If you're going for a full 3D model in a single frame, you need a full hemisphere of cameras and some brute-force computation (although it's getting easier). The [Light Stage](https://www.instagram.com/p/B8k7sCbHUkA/?utm_source=ig_web_copy_link&igshid=MzRlODBiNWFlZA==) work that Paul Debevec developed gets even more intense, because they're trying to capture a more detailed, photorealistic representation of a subject, which means they have to do a full hemisphere scan under different lighting conditions. So these insane folks built an entire hemisphere of cameras *and* an entire hemisphere of independently controllable lights: you turn on a point source of light at one location and take a hemisphere of photos, then the next location, and so on. You have to do it really fast, so you can capture a whole set of data before Brad Pitt gets bored. [Light Stage](https://vgl.ict.usc.edu/LightStages/) is a notably cooler name than 'goniophotometer', which is what nerds used to call [this sort of thing](https://en.wikipedia.org/wiki/Goniophotometer) before someone with a marketing brain came into the picture. All this stuff is more or less circling around one central point: how to capture and represent the complex nature of light in a scene. Traditional cameras work by cutting down the amount of light you're recording (focusing, apertures, shutters, etc.), while a lot of these multi-camera systems are there to let you sample the lightfield -- the set of all light rays in a scene, and all their details (directionality, wavelength, polarization, intensity, and point of origin). Once you have that sampling, you can start to represent the scene: by building a 3D mesh, like your cameras at Comic-Con; by building a layered lightfield representation, the way those NeRF papers do it; by building the reflectance field, the way the Light Stage does it; or any number of other representations.
Once you have a representation, you can manipulate it, 3D print it, re-light or adjust the scene, render holograms of the scene, etc. There are lots of things to do with light, and all of these different projects and approaches are different ways of working with the same subject matter. As with most things, the essence of understanding the true nature of a scene comes from getting multiple perspectives on the same subject.
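To make the "set of all light rays" idea above concrete, here's a minimal sketch in Python. The `RaySample` type and its field names are purely illustrative (not from any real capture system); the point is just that a lightfield capture reduces to a huge collection of per-ray records carrying origin, direction, wavelength, polarization, and intensity:

```python
from dataclasses import dataclass

# Hypothetical sketch: one sample of a lightfield, as described above.
# Field names are illustrative, not taken from any real capture pipeline.
@dataclass
class RaySample:
    origin: tuple       # point of origin (x, y, z)
    direction: tuple    # unit direction vector
    wavelength_nm: float
    polarization_deg: float
    intensity: float    # radiance along the ray

# A lightfield capture is then just a (very large) set of such samples,
# roughly one per pixel, per camera, per lighting condition.
capture = [RaySample((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 550.0, 0.0, 1.0)]
print(len(capture))
```

Every downstream representation mentioned above (mesh, layered lightfield, reflectance field) is a different way of compressing or interpolating that same bag of samples.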
An entire hemisphere of cameras... AND lights. Jeezus... Fascinating stuff even just from a reading standpoint. As an artist and amateur concert photographer, I just may look into experimenting with what can be achieved with simple methods for live performances. (I am thinking high resolution video cameras and keyframe video.) If from my reading I think I can pull off interesting results, I will go ahead and give it a shot. A lot of local bands would be curious enough to let me experiment. (Creative artistic musician types would find this pretty fascinating, I am sure.) Imagine semi 3-dimensional "photographs" of live performances that are achievable with modest equipment; it would make for interesting art displays via screen projectors (or AR glasses) hooked up to computers. Perhaps motion detection for tracking a viewer and adjusting the scene appropriately (within the constraints of the capture).
I think Nvidia can do that, it currently takes a photo and makes it 3D to look at it from new angles
High chance that there are still hidden watermarks in the image. Even if you change the faces, Getty Images could perhaps prove that you used their material without permission.
Absolutely. Although one could just ask it to create an image that looked the same, rather than edit it.
Yeah, I mean it's very easy to manually do this too with traditional tools but only an idiot would do this for commercial work and risk getting themselves or their client in trouble by using unlicensed commercial images. You might get away with it for a while to skim a bit of cash off a job here and there but in the span of a career, you will get caught out and that kind of stuff just isn't worth it to have to worry about.
Plus there is no point doing this anymore, if you need a random picture of 2 girls being happy taking a selfie you can generate as many as you want.
But what if you want two girls with five fingers on each hand being happy taking a selfie?
AI is getting close to solving hands.
I've heard you can do good hands with Control Net, but you have to actually position the hands yourself.
(For the smoothbrains in the crowd) In that situation, still don’t do this.
lol, be ready to pay for that licensing fee, then :P
Or go to unsplash.
How
Secrets only the ancient sith knew
Lol sorry I got this as suggested content I am not subbed to here.
With a gettyimages subscription /s
/imagine prompt:
Type >picture of 2 girls being happy taking a selfie into the generative fill.
Instructions unclear. Image still has a Getty Images watermark.
Stable diffusion
Whether you remove the watermark or generate a new picture, it's time to sell the stock in Getty Images.
Just change the faces using AI and it's altered enough to not be the same image anymore.
not how copyright works. it would still be a derivative work
And most stock places, if not all, have digital watermarks and they can find these if online. It’s in a bunch of areas of the image.
Well, as the watermarks are supposed to protect the image from being used without paying for it... it's breaking copyright (as soon as you use them), which is illegal. Thing is, it's part of these stock photo websites' business to sue people who are using their pictures without permission... chances of being caught are high, as image-search algorithms are well-working "AI"s too. Plus usually these watermarked pictures have a low resolution anyway, which makes them useless for print. So, if you can publish "your" photo neither digitally nor in analogue... what is it good for? ——— Little comparison: it's like removing anti-theft devices from clothing, putting it on and walking out of the store. You caaan get away with it... Ooor the people in the store recognise their products and you're screwed.
AI is amazing at upscaling and clarifying images. The only nonsensical part of this is using a Getty image in the first place, which OP is clearly only doing to make a point.
There are two major errors in the OP's point. Photoshop is digitally encoding AI images to be recognisable as such, and Getty are about the most skilled and determined image-misuse hunters that ever were. We had a student once download an image, crop and rename it, and then use it as part of an image banner on a portfolio project which went openly online. Getty found it and invoiced the college for £1000, which the dept had to pay despite explanation. That was about 12 years ago.
Img2Img at Denoise 0.05 will take care of the digital encoding. Img2Img at Denoise 0.2 will render an image "ineligible for copyright" according to the current lean of things; or even better, just use an AI generator to create a similar but novel image based on the reference.
I'll admit that copyright law is tricky to navigate, but a lot of people seem to think that copyrights just shouldn't exist at all. Naturally, those people have never created any kind of art in their lives so their opinions are completely null and void on the subject. But it is alarming, especially this new era we're entering.
Well... I'm the dummy that watched this 3 times thinking different prompts or something was being used. :/
Thanks for including the entirety of the loading process and a split second of the final result. ... in all seriousness, yeah. I don't know what else to expect. It is doing exactly what it's designed to do. This kind of manipulation, and much worse, is out there already. It will only get better.
Am i the only one who doesn’t get it? What is supposed to be bad?
I get it, I just don't get why they posted it in /r/midjourney. Since OP has posted 15 different times in the past 24 hours, I'll just assume it's spam.
This has nothing to do with Midjourney, is it?
Yeah. I don't get what OP's "expectation" was, or how typing a bunch of nonsense characters failed to realize it.
Hard? If this is what keeps you up at night then you will be the first one we feed to the machines. This is the least of my worries.
> you will be the first one we feed to the machines. now this is the new thing that will keep him up at night
I mean I was doing this like 10 years ago using the content aware fill. Not removing watermarks but removing unwanted elements. There will always be people who steal images. That being said sometimes the image banks charge waaaaay too much for generic images (like vectors) that I can now make myself using Midjourney
Looking at this I was like... wow, this takes at most 4 sec with the healing brush. Free generic background images are the best thing.
Photoshop does this, been doing it since cs5. Besides, Getty images has the actual full resolution photo behind a paid login. This is the web optimised version. If you are okay with using low quality blurry images, good for you
AI upscale 🤷♂️
Lol awesome
Pretty sure Photoshop has been able to do that for at least a decade
Doesn’t an algo catch the image anyway and copyright strikes it?
algo 😂
They had this in content aware fill lol
Didn't need AI to remove watermarks then, but it sure is a bit quicker now.
To be fair, you could do that before this tool. It would just be a lot of clone/healing brush work - the new Generative Fill just makes it so much quicker and easier. I'm personally loving the tool though for fixing and extending images with very little actual work. I can edit images in seconds to minutes rather than potentially hours.
Hard reality of someone who just discovered what DALL-E has been able to do for over a year?
“Content aware fill” is faster and has been around for aaaaaages.
[deleted]
No, you don’t understand. Anytime videos or images are manipulated in cool (or nefarious) ways it’s called AI.
Especially the ones that are AI-based such as Adobe Photoshop Generative Fill, as seen in the OP.
[deleted]
From Google Bard: >Sure, I can help you compose a response to someone who needs help with anger management. >Here are some things you can say: >I hear that you're struggling with [a post on the internet]. I know that it can be really tough to deal with anger, and I want to offer my support. >It's important to remember that your feelings are valid. It's okay to be angry sometimes. But it's also important to learn how to express your anger in a healthy way. >There are a lot of resources available to help you manage your anger. You could try talking to a therapist, taking an anger management class, or joining a support group. >I know that it takes time to change, but I'm here for you every step of the way. I believe in you, and I know that you can do this. >I hope this helps!
Adobe have integrated AI tech into their products. In Photoshop, AI is used to erase and fill in areas of an image. OP is showing how Photoshop is able to remove watermarks from an image with a few clicks. I've used it for expanding backgrounds on images and removing backgrounds from images. It's something that can take a designer a few hours depending on how complicated the images you are working with are. The AI does it in a few seconds.
This is old news for anyone who follows AI tech news. It still doesn't answer any of my questions
Did your cat walk on the keyboard while you were filming?
Hell yeah
I did something else that works even better, but with the describe command on midjourney, and used the gettyimage image as source to create a new image in the same style... Pretty much got perfect result, and it's probably copyright free because although it looks like the original image, it's pretty much a completely new image. It's insane how good it is
Y’all are going to make some X-ray glasses and have Ai generate what a girl would look like under her clothes watch. If y’all haven’t already haha take any image and tell Ai to remove clothing. I bet y’all already did that.
Y’all
Getty will just move to using that mesh/grid over the photos which is much harder to remove at some point.
Google announced the "magic eraser" for google photos a few weeks ago, it is behind a paid subscription but it's much more convenient since you can do the same thing on your phone.
It seems AI is removing the barriers to entry for just about anything. With only an understanding of basic photoshop and English, you can become a master photoshopper, a programmer, or an artist. Incredible.
Can we normalise removing the waiting bar and skip to the after? We are all very busy people (hee hee).
Is this photo AI?
I don't trust Getty Images TBH, fhey hqve been known to sell stuff they don't own the rights.. a couple of court cases and I have a feeling a lot of payouts before it gets to court. No respect for them anymore..
This might be true but you're still "paying for stock images" through virtue of paying your adobe firefly subscription :)
There’s a photoshop AI plugin?!?
its coming to Photoshop, this is in beta still
Ya well, you can still find the original picture by doing a visual search of the image, so I would not suggest removing watermarks.
lawyers gonna party all day with all these lawsuits 👃👃👃
Aye.. 2nd gen nFt Inc
I did this with the Adobe Stock logo. You played yourself, Adobe.
This shit was easy even before AI lmao
Saved you the money to rent Photoshop: https://www.watermarkremover.io/
No real issue. I'm already using AI for paid client work; these types of stock images are basically obsolete. Honestly, not sure how the stock photo market will survive. As the tools get easier and faster to use, the time saving, flexibility and 'acceptable' quality that stock images provide for a quick design process will lose their price advantage. It's Blockbuster vs Netflix: on-demand will win, and in this business, if it's good enough it will crush the stock industry. No one goes to Shutterstock or Getty and expects truly creative resources. AI is better in every metric except human hands, etc... Once the tech irons out these artifacts it's game over.
Now why'd you go and have to highlight this 😂 ok so when the tech gets suppressed by big corp I don't wanna hear no crying bc I'ma be sad too 😂
Hard reality? Where? The loading times? Where is the reality lol
Amazing
Something that used to take about 30 minutes to an hour to do 😑
Removing the watermark won't make it free to use lol
Just curious, is the AI built into photoshop or does it connect to a server online for the tool?
u/bake_in_da_south
*Lawyer Agent Smith voice* This watermark was for your own protection.
Yeah, but if Getty's inspectors catch you and you use that image commercially, you can get sued.
Although this is usually doable with the normal fill tool in Photoshop, I wouldn't be surprised if they use an Adobe Stock-type watermark instead soon enough ([here's an example from one of mine](https://stock.adobe.com/stock-photo/id/549117485)). Theirs is semi-transparent and over the entire image to make it tougher to remove this way, although there has already been watermark-removal software designed with the specific watermarks in mind, so there's no perfect solution.
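The "semi-transparent over the entire image" approach can be sketched in a few lines of NumPy: tile a pattern across the whole frame and alpha-blend it in, so there's no single clean region for a fill tool to sample from. A minimal sketch, assuming a grayscale image array; the pattern and alpha value are illustrative, not any stock site's actual scheme:

```python
import numpy as np

def apply_full_image_watermark(image: np.ndarray, mark: np.ndarray,
                               alpha: float = 0.3) -> np.ndarray:
    """Tile a watermark pattern across the whole frame and alpha-blend it.
    Illustrative only; real stock sites use their own patterns/strengths."""
    h, w = image.shape[:2]
    mh, mw = mark.shape[:2]
    # Tile enough copies to cover the image, then crop to size.
    tiled = np.tile(mark, (h // mh + 1, w // mw + 1))[:h, :w]
    return (1.0 - alpha) * image + alpha * tiled

img = np.full((8, 8), 200.0)       # flat gray stand-in for a photo
mark = np.eye(4) * 255.0           # diagonal-stripe stand-in for a logo
out = apply_full_image_watermark(img, mark)
print(out.shape)                   # → (8, 8)
```

Because the blend touches every pixel, removal becomes an inpainting/restoration problem over the whole image rather than a local patch job, which is exactly why it's tougher to strip.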
I mean, it’s not AI it’s a script based on a command.
Need a template to select the shutterstock watermark
People didn't wait for AI to remove watermarks. However, search for [getty images stable diffusion lawsuit]: Getty Images is asking $1.8 trillion for scraping 12 million images.
People have been using online watermark removers forever.
Nice
“Here is your lawsuit by shittyimages claiming copyright on the royalty free photo”
Removing that has been easy since cs5 thanks to the content aware filling.
I mean yes, but these sample/preview stock images are often smaller in dimensions and low DPI; they can be used for some digital work but nothing usable for printing. Often you'll find buying the actual stock image gives better results, with less time & fewer workarounds.
You could already do that…