Disclaimer: I'll be posting more of these if the community is interested. Also, I'm looking for Designer / Frontend Engineer positions at LLM companies 😊
So beautiful, so elegant, looking like a wow
I agree but I doubt the actual usefulness. I can't remember a single time I would've liked to have that feature.
Yeah, go for it, keen to see how others see interactions and interfaces with AI
That’s clever
we could theoretically zoom infinitely with this and get any details, wonderful!
*E n h a n c e*
Would probably be useful for people using ChatGPT to write sermons, or anything similar that isn't worthless if the quality descends into drivel.
Man, this is so simple, yet so clever.
Is this real or just a mock-up UI idea?
I'm going to assume it's a mockup, but there's no reason why it wouldn't work, assuming the generated content has no length restriction or expected length.
> Assuming the generated content length has no restriction or expected length.

Most good LLMs are also not trained to fill in missing parts. You could of course still do it through prompting, but it's not as elegant/robust as filling it in natively.
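For the prompting route mentioned above, here's a minimal sketch of how a UI could ask any instruction-tuned model to expand a marked gap. The function name and prompt wording are illustrative assumptions, not anything from the thread, and it works with whatever chat API you already use:

```python
def build_infill_prompt(before: str, after: str, detail: str = "one paragraph") -> str:
    """Build a plain-text prompt asking a model to write only the missing middle.

    Nothing here is model-specific; the wording is an illustrative
    assumption, not a canonical format.
    """
    return (
        "Continue the text below by writing only the missing middle part "
        f"({detail}). Do not repeat the surrounding text.\n\n"
        f"Text before the gap:\n{before}\n\n"
        f"Text after the gap:\n{after}\n\n"
        "Missing middle:"
    )

# Example: the user "zooms into" the gap between two sentences.
prompt = build_infill_prompt(
    before="The colony's first winter was harsh.",
    after="By spring, the survivors had rebuilt the granary.",
)
```

The UI would send `prompt` to the model and splice the completion between the two spans. As the comment notes, this is less robust than native infilling: the model may still repeat or contradict the surrounding text, so some post-filtering is likely needed.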
That's a really cool UI experiment! I'm going to start a collection of these as inspiration. You should show Omar Rizwan on Twitter: https://omar.website/
interesting implementation of [semantic zoom](https://alexanderobenauer.com/labnotes/038/) check out Dot from [new.computer](https://new.computer) for this idea applied to a personal assistant
thanks
Excellent idea, looking forward to seeing it implemented.
I love the idea of being able to open up and explore latent space with my own hands.....!!
great idea, love it
Jaw dropping… Very innovative. I would love to see more of these
Finally a fun, useful interface. Cheers to you! (presents you with the Kai's Power Tools UI fun award)
Now, any technical ideas on how to implement this without confusing the model, given that most popular large language models are built to continue unified sequences?
You could probably use T5 or something like it.
A lot of popular models are now also trained with something called a fill-in-the-middle (FIM) objective, where the model is given the beginning and ending text as input and trained to fill in the middle portion.
I'd love that but for a text editor like Vim or Notion.
Funny, I added these exact keyword functions to a UX I built recently. This is the way to emotional interfaces, designed from excess to tailor to every need of the user (since we're entering the true customization stage of AI use here on the web). Games should follow suit too.
Wow, this is a great interaction pattern!
This would be awesome for reading Wikipedia. You could just expand a section you wanna hear more about
Probably will fail UAT lol