
Thoughts?

WildWestDesigns

Active Member
Most on here (along with my other opinions, I'm not really shy about them) know my disdain for AI. And here we have this article ("article" is an embedded hyperlink) about an interview with Adobe execs saying use AI or risk being left behind. It sounds more like puffing up something that's easy to bolt onto their long-in-the-tooth software, and of dubious value to include at that. But I may just be a relic of a bygone era.
 

Saturn

Your Ad Here!
("article" is an embedded hyperlink)
Since signs101 has goofy defaults that don't make links stand out, I usually hit the 'Text color' option and make my links blue.

AI in apps might be getting oversold, but there's a lot you can do with models like Claude and ChatGPT on your own right now, if you're a curious person.

I can't write any real code, but I can get AI to write me Illustrator scripts I wouldn't otherwise have easy (or free) access to. I also use it to compare text documents for changes/variations. If I were smarter and made heavy use of lists, or made a point of planning things out in detail, they're great for that as well.
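For the document-comparison use at least, it's worth knowing you can sanity-check what an AI hands you with plain Python: the standard library's difflib does line-by-line comparison on its own, no model needed. A minimal sketch (the filenames and sample text here are made up for illustration):

```python
import difflib

# Two versions of a sign proof, as lists of lines (hypothetical content).
old = ["Grand Opening Sale", "50% off all banners", "Call 555-0100"]
new = ["Grand Opening Sale", "40% off all banners", "Call 555-0100",
       "Free delivery"]

# unified_diff yields lines prefixed with '-' (removed) and '+' (added);
# unchanged lines get a leading space.
for line in difflib.unified_diff(old, new, fromfile="proof_v1.txt",
                                 tofile="proof_v2.txt", lineterm=""):
    print(line)
```

Running this prints a unified diff showing the "50%" line removed, the "40%" line added, and "Free delivery" added, which is often all the "compare for variations" task really needs.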

I'm not entirely optimistic that it will "make life better" overall for humanity, but I do think it's pretty exciting technology. My take is that it will get to the point where it's so good that the difference between the haves and the have-nots will be whether you can afford the $500-1000+ per month subscription. I also think it's advancing quicker than most people realize...
 

WildWestDesigns

Active Member
AI in apps might be getting oversold, but there's a lot you can do with models like Claude and ChatGPT on your own right now, if you're a curious person.

I have gone thru several iterations of tooling like this. It has always tended to be that the easier stuff could be replaced by tools like that, but once you get to the more advanced use cases, it tends to fall down, and fall hard. In a lot of instances it would actually take more time to shoehorn what they spit out into shape than to do it from scratch (and if one doesn't know what they are doing, that compounds the problem).

At some point, the well of new art/code will be poisoned and/or AI will get into an incestuous relationship with itself, consuming content done by other AI (or even the same AI).
I can't write any real code, but I can get AI to write me Illustrator scripts I wouldn't otherwise have easy (or free) access to. I also use it to compare text documents for changes/variations. If I were smarter and made heavy use of lists, or made a point of planning things out in detail, they're great for that as well.
If that's the case, how can one really trust what is being spat out? Or is "close enough" not just for horseshoes and hand grenades, but for AI as well? There are some C libraries that I know well that AI can't generate code for at all. At all. If one is using this to learn, or to obfuscate the need to learn, they're going to run into a road block. Now, given the group here, if we're talking about art, it does horrid things there, especially with the extremities, but also things like zippers "melting" into skin, or how some body parts attach to other body parts. Can those things be addressed? Perhaps, but given that AI "learns" in a different way and how it produces the end result, I don't actually know how far progress will go using the LLM method.

I'm not entirely optimistic that it will "make life better" overall for humanity, but I do think it's pretty exciting technology. My take is that it will get to the point where it's so good that the difference between the haves and the have-nots will be whether you can afford the $500-1000+ per month subscription. I also think it's advancing quicker than most people realize...
The problem that I have, and I have seen it work out this way thru different iterations, is that future generations that know nothing but the tool will have a much harder time when they are without it (for whatever reason). We already have an issue with what they call the "AI pause": even though the user may know what to do, they stop and wait for that prompt to show up and tell them what to do. Over time that is actually far more inefficient, and I would hate to see what happens when the chatbots aren't working for whatever reason. Oh wait, we had that a few weeks ago, when most of the better-known bots went down close together and there was a drag on work because of it.

How many people here complain about younger hires not knowing basic stuff, sometimes as basic as reading a measuring tape? Or doing simple math in their head (I have seen some young people break down, unable to give change, when their POS system is down).

I don't necessarily have a problem with abstractions; I just think the problem is in how we learn and handle the use of abstractions. Even our "desktop" applications are suffering now, because devs are used to abstractions (in this case the web browser).


I would, however, suspect that corporations will love this, which is why I'm not surprised Adobe is pushing it; they seem to be more in the data market than the creative one now, and most of their customers these days are corporations (and it would make it easier for people to be "excited" about their "software" again, as it is kinda long in the tooth otherwise). But that is just me speculating.
 

Pauly

Printrade.com.au
I have gone thru several iterations of tooling like this. It has always tended to be that the easier stuff could be replaced by tools like that, but once you get to the more advanced use cases, it tends to fall down, and fall hard. In a lot of instances it would actually take more time to shoehorn what they spit out into shape than to do it from scratch (and if one doesn't know what they are doing, that compounds the problem).

At some point, the well of new art/code will be poisoned and/or AI will get into an incestuous relationship with itself, consuming content done by other AI (or even the same AI).

If that's the case, how can one really trust what is being spat out? Or is "close enough" not just for horseshoes and hand grenades, but for AI as well? There are some C libraries that I know well that AI can't generate code for at all. At all. If one is using this to learn, or to obfuscate the need to learn, they're going to run into a road block. Now, given the group here, if we're talking about art, it does horrid things there, especially with the extremities, but also things like zippers "melting" into skin, or how some body parts attach to other body parts. Can those things be addressed? Perhaps, but given that AI "learns" in a different way and how it produces the end result, I don't actually know how far progress will go using the LLM method.


The problem that I have, and I have seen it work out this way thru different iterations, is that future generations that know nothing but the tool will have a much harder time when they are without it (for whatever reason). We already have an issue with what they call the "AI pause": even though the user may know what to do, they stop and wait for that prompt to show up and tell them what to do. Over time that is actually far more inefficient, and I would hate to see what happens when the chatbots aren't working for whatever reason. Oh wait, we had that a few weeks ago, when most of the better-known bots went down close together and there was a drag on work because of it.

How many people here complain about younger hires not knowing basic stuff, sometimes as basic as reading a measuring tape? Or doing simple math in their head (I have seen some young people break down, unable to give change, when their POS system is down).

I don't necessarily have a problem with abstractions; I just think the problem is in how we learn and handle the use of abstractions. Even our "desktop" applications are suffering now, because devs are used to abstractions (in this case the web browser).


I would, however, suspect that corporations will love this, which is why I'm not surprised Adobe is pushing it; they seem to be more in the data market than the creative one now, and most of their customers these days are corporations (and it would make it easier for people to be "excited" about their "software" again, as it is kinda long in the tooth otherwise). But that is just me speculating.

I bet that's what everyone said about the internet and its knowledge base vs. the then-current libraries of encyclopedias.
 

WildWestDesigns

Active Member
You could start by testing it.
I already have:

There are some C libraries that I know well that AI can't generate code for at all.
It gets the names of functions wrong and replaces them with ones that sound good but are not real. It is not functional code at all. Now, if I were doing a Python program, it probably has plenty to scrape from and could create something at least somewhat usable. But again, given the cleanup needed afterward, is it still worth it? This is perhaps the biggest problem with even the current auto-conversion tools that most on here are used to.

And here as well:
Now, given the group here, if we're talking about art, it does horrid things there, especially with the extremities, but also things like zippers "melting" into skin, or how some body parts attach to other body parts.

I didn't pull this out of thin air.

Yes, I have tested it. If nothing else to be able to cogently try to form an opinion on it.

Now, I'm sure that, as with any tech, there will be advancements, and what I have tested here may not be applicable in a few months' time, sure, but it's still doing it now. And while they improve the LLMs, there will be people trying to poison the well. I can see some things going behind more paywalls and not being as easily accessible as they once were, or just not being done at all, and that would be less for the LLMs to "learn" from.

The biggest thing would be an in-house version, and that would probably be a good middle ground. I don't know if that will happen or not. There are open source models that could be run that way; I think Llama is one. I have a feeling that in some areas, if the laws don't change, an in-house version is the route most will go.

I bet that's what everyone said about the internet & the knowledge base vs the current libraries of encyclopedias,

The closest analogy would be doing digital art versus traditional pen and paper. The difference between that point and now is a whole lot more abstraction, much more than the jump from traditional to digital. And this isn't even getting into the legal issues (at least here stateside, although I'm sure that will change, and it would apply to code as well as art). The more the user knows about what is being abstracted, the better the end result with something like LLMs, and they will of course be able to clean it up in post. The less one knows, the rougher a time it's going to be.

The problem is that the people with the best chance of that are the users now; the 3rd and 4th generation users, not so much. We have seen that even with computer usage among people that have been called "digital natives". Not everyone, but an astounding percentage of them are that way. They aren't even used to a file system and how to interact with it. That is the bigger concern I'd have with this tool. Just like we have people that can really only use auto conversion and don't know how to clean things up after that conversion.



The one problem that I do have with your analogy (and it has been illustrated in a bad way in recent years) is that it's too easy and quick to change things to suit the now. Some of that can be good, depending on the application. But always having that constant ability to fiddle with things that are "live", like on the internet, can be bad. Not in all instances, but enough of them have happened that we really need to vet sources that much more (a little harder to do if one doesn't have the knowledge base to vet them, especially if one is learning something).

Wild West is keeping us up on new tech and opinions.
There will be a test on Friday.

No, there won't be a test. I just think it's good to keep up with the makers of the tools we depend on, and the direction they are going in. Some may agree, some may not. That is all.
 

Johnny Best

Active Member
No test? I studied all night on zippers melting into skin instances. But thanks for keeping us up to date on new tools that are developing.
 

WildWestDesigns

Active Member
No test? I studied all night on zippers melting into skin instances. But thanks for keeping us up to date on new tools that are developing.
Just think, you are ahead of the game compared to all the youngins out there. There is joy in going thru the process. Some people need that "paper chase".
 

Gino

Premium Subscriber
No test? I studied all night on zippers melting into skin instances. But thanks for keeping us up to date on new tools that are developing.

Is this gonna be like a jeopardy quiz ??

I'm still listening for robert. Think that'll help in the test ??
 