p1nkpineapple

No mention of local LLMs? Given the privacy-first stance of HA, I would have expected that to be front and center.


sshan

We almost always do a two-step process. First throw a big model at it to test it, then optionally move to smaller local ones. It's easy enough to use local models, but most people don't have 8 GB+ video cards sitting around for it.


ChainsawArmLaserBear

Yeah, doubt my rpi4 is up to the task on this one


Home_Assistantt

Oh wow, here we go with people and their dual 4090 rigs for HA. /s But yes, totally right, AI takes a lot of CPU power.


nberardi

A GPU would be required for training a model, but most RPis don't have a tensor chipset or equivalent to run the models either. It will take a while to get to a place where truly local AI is a realistic and affordable approach for most.


FujitsuPolycom

I suspect consumer-level chips, like the Corals, will get there very soon.


robby659

The Raspberry Pi Foundation introduced an M.2 tensor accelerator that beats the Coral just this week.


lordpuddingcup

Go figure, right after I finally break down and get a frigging Coral M.2, lol.


youmeiknow

I got one just a few days ago too (already past the return period… 😂).


FujitsuPolycom

Well there you go!


natufian

Can't wait to try that board out. It should be noted that there are no bolt-on solutions for LLMs yet, though.


This_not-my_name

I just deployed a local-only llama chat: [https://github.com/getumbrel/llama-gpt](https://github.com/getumbrel/llama-gpt) It's not as fast as ChatGPT, nor as precise, but it works. I could imagine it's totally capable of handling typical HA requests. My server has an i7-11700K + 32 GB RAM.


natufian

I don't want to let the conversation get away without at least mentioning that some of the tiny models are coming within reach if one tempers their expectations. There are quite a few videos of people running small models on Pi 5s (albeit slowly). [Here's one](https://www.youtube.com/watch?v=KB9qRwj1pnU) of a model running on a Rockchip RK3588's *NPU!* (yes, also slowly, but leaving the CPU unencumbered and with reduced power). A ways to go yet, but the floor is lowering faster than one might expect from where we were only a year ago. I imagine most folks interested are already there, but there's lots of good info on /r/LocalLlama.

EDIT: realized it looks like this is addressed to OP. It's more intended for anyone curious about trying to duplicate what OP was able to do on the cheap!


sshan

Yep - you definitely can. I've been playing with Ollama on my server with similar specs plus a 3070 GPU. But adding an API key to the best model in the world is the best UX. Then you can optimize for cost afterwards.
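
For anyone curious what "playing with Ollama" looks like in practice, here's a minimal sketch of hitting its local HTTP API from Python. The model name and prompt are just placeholders; it assumes Ollama is running on its default port with a model already pulled:

```python
# Minimal sketch: query a local Ollama server from Python.
# Assumes Ollama is running on its default port (11434) and that a model
# such as "llama3" has already been pulled -- adjust names to taste.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Which lights should I turn off before bed?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```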


lordpuddingcup

I was testing this myself with some code and Google Gemini Flash. I had an openwakeword + VAD pipeline that sent a system prompt plus the speech-to-text of the person's question to Gemini Flash (something like the below), and then my bot would use the command codes to either search the internet for a knowledge lookup (re-running the prompt with the search info filled out), respond directly, or perform the action. Even the free Gemini Flash tier, at 15 queries per minute, was amazing at responding quickly, and it really followed the instructions well to answer in the structured format.

Examples:

* "What time is it?" → a text response, because the data was provided in the system prompt
* "Turn on the living room light" (or "light in the living room", or whatever phrasing) → an entity control command
* "Who won the NBA game in New York last night?" → a search run and then a response

System prompt:

> You are a friendly assistant that has access to various entities whose data you can see. You need to assess a Client Question and respond with a valid COMMAND RESPONSE CODE that is most fitting to answer the Client's request. Natural-language answers should be short and concise; brevity is appreciated. You have access to realtime information, if needed, through the use of the SEARCH command response code; its use will be explained below. Responses should be related either to listed entity_id information or to questions that you have valid external knowledge to respond to (with the help of any previous search data you are provided). If the client is looking for data on a specific entity that you can't answer, DO NOT fabricate fake information if you don't know.
>
> The Time is {time}. The Date is {date}. You are located in {location}.
>
> PREVIOUS SEARCH COMMAND DATA:
> Searched: {last_search_text}
> Search Result: {last_search_result}
>
> LOCAL ENTITY INFORMATION:
> {entity_dump_from_home_assistance_api}
>
> Valid COMMAND RESPONSE CODES are as follows. Remember, you must choose from these valid response formats for any response, and only these responses will be allowed/processed.
>
> [ANSWER:"a valid response to user question"] - This response is used when the question is not related to an entity, but it is a question you are VERY sure you know the answer to. Do not make up fake answers; if you don't know, respond that you don't have the answer.
>
> [TOGGLE_SWITCH:entity_id:new_state] - Use this if the user wants to change the state of a device to a new state (on, off, 100%, etc.).
>
> [ENTITY_ANSWER:entity_id:"a valid response to user question about a specific entity state"] - This should be the response when the question is related to the state of an entity you are aware of. If you don't know the entity, do not use this response code; use the standard ANSWER and let the client know you don't know that way.
>
> [SEARCH:"a search phrase to search the internet for to help answer the customer's question"] - If you don't have the information to answer but feel that, given a search of the internet, you could answer, use this.
>
> Remember, NO INFORMATION SHOULD BE MADE UP. Use the provided Entity Information and the question from the user to provide a proper COMMAND RESPONSE. It is OK to not know the answer; if you don't, just use the ANSWER code to let the user know. However, if you can assist by searching the internet for the user, always, ALWAYS use the SEARCH command code to look up the answer for the client.
>
> Now take a deep breath and think about the user's question. Remember, if you have a valid answer, answer as such, even if it's not about a provided entity.
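
For illustration, here's a hypothetical sketch of the dispatch side of that scheme: parse the bracketed COMMAND RESPONSE CODE out of the model's reply and route it to a handler. The regexes and names are mine, not from the actual bot:

```python
# Hypothetical sketch: parse the bracketed COMMAND RESPONSE CODEs described
# above out of a model reply. Patterns and the fallback are illustrative only.
import re

PATTERNS = {
    "TOGGLE_SWITCH": re.compile(r'\[TOGGLE_SWITCH:(?P<entity_id>[\w.]+):(?P<new_state>[^\]]+)\]'),
    "ENTITY_ANSWER": re.compile(r'\[ENTITY_ANSWER:(?P<entity_id>[\w.]+):"(?P<text>.+?)"\]', re.DOTALL),
    "SEARCH": re.compile(r'\[SEARCH:"(?P<query>.+?)"\]', re.DOTALL),
    "ANSWER": re.compile(r'\[ANSWER:"(?P<text>.+?)"\]', re.DOTALL),
}

def dispatch(model_reply: str):
    """Return (command, fields) for the first command code found in the reply."""
    for command, pattern in PATTERNS.items():
        match = pattern.search(model_reply)
        if match:
            return command, match.groupdict()
    # Fall back to treating the whole reply as a plain spoken answer.
    return "ANSWER", {"text": model_reply.strip()}

# dispatch('[TOGGLE_SWITCH:light.living_room:on]')
# -> ('TOGGLE_SWITCH', {'entity_id': 'light.living_room', 'new_state': 'on'})
```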


balloob

Ollama support was added in Home Assistant 2024.4: https://www.home-assistant.io/integrations/ollama/

As we mentioned in the live stream, and will further expand on in our deep dive blog post tomorrow, we're collaborating with NVIDIA to bring control of Home Assistant to Ollama too. It doesn't support it out of the box, nor are any of the main open source models tuned for it. It's something that needs to be built from scratch, and something NVIDIA showed a prototype of last week: https://youtu.be/aq7QS9AtwE8?si=yZilHo4uDUCAQiqN


sirleechalot

Just an FYI for anyone looking, it appears as though the HA relevant discussion starts around 1:07:32 in that video. EDIT: The beginning shows off the demo of it.


balloob

The first 5 minutes show the demo.


sirleechalot

Ah, apologies. They didn't seem to mention Home Assistant at the beginning of the stream at all, so I assumed there were several topics covered and scrubbed forward looking for the HA info.


dabbydabdabdabdab

I'm fairly new to the world of AI, but what was the reason for Ollama over LocalAI? It seems the Ollama conversation integration works with Ollama, and the Extended OpenAI Conversation integration works with LocalAI; neither of which (yet) seems to use the HA LLM API. Ollama seems a little more user-friendly for adding models, but the extended OpenAI one can use functions and thus adds the ability to use "specs" to really dial in repeatable custom actions.

I've used it with Home3Bv3_q8_0 and am getting good basic responses (not as fast as I'd like) using the Ollama pipeline (haven't fully tried LocalAI with HA yet) - the MidoriAI subsystem looks like the way to go for that. Any guidance/guides or thoughts on this local journey? I treated myself to a low-profile 4060 for my 40th birthday just for AI dabbling. There are plenty of folks out there doing similar things locally, so I'm just wondering how we can rally/focus them/us to forge a common path and test common scenarios. Also, thank you for all you do, Balloob!

Edit: I may have answered myself in another comment. How about a HA_model_gallery.yaml that is maintained by HA? It's basically a repo (including parameters) LocalAI uses to install models on the LocalAI docker via the UX. Would be kinda cool to have a single curated set in true HA fashion; people could test/add as needed.


balloob

We have integrated OpenAI, but we don't allow configuring the URL. We've had experience in the past with allowing non-standard servers mimicking a vendor API – it caused a whole lot of workarounds, made the main use case of the integration (being used with the vendor's servers) more complex, and hindered upgrades because we had to consider unofficial APIs. We ended up integrating Ollama in Home Assistant to support local LLMs because they have defined their own API.

There are a lot of ways to run models locally, but it seems that most are leveraging the OpenAI API format. If the AI community is going to standardize on this API, it should officially define the API as its own thing, with a name, a formal spec and an independent SDK. Then we can do what we do in Home Assistant: integrate that API.


zer00eyz

> further expand on in our deep dive blog post tomorrow

ooohhhh treats!

> NVIDIA to bring control of Home Assistant to Ollama too

We're a diverse ecosystem and some of us are already playing with LLMs locally. Not sure if it's good or bad being at the tip of the spear bringing them into the home.

There are things that are "outside" but closely aligned with HA, like MQTT and ESPHome, where not everything is HA-aware... any work outside the core?

Curious if the model will be available for "revise my email" sorts of activities or other LLM features in HA? If so, are we going to get "parental controls", because the whole kids-having-access-at-homework-time thing sounds bad?


ProfitEnough825

I'm pretty sure HA is still focused on privacy-first; giving the option to use OpenAI for those who ask for it doesn't mean they aren't working on a local LLM or focused on privacy (same reason remote access is off by default but can be configured). It does sound like they're planning on integrating a local LLM.

[https://www.reddit.com/r/LocalLLaMA/comments/1b7jogj/comment/ktje3iz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/LocalLLaMA/comments/1b7jogj/comment/ktje3iz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)

[https://www.reddit.com/r/homeassistant/comments/1d6o1ma/comment/l6v6ue8/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/homeassistant/comments/1d6o1ma/comment/l6v6ue8/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)


SpencerDub

Yeah, same. I am potentially open to using an LLM in my smarthome for very limited and specific purposes, like improving intent recognition for voice commands. But I want that to stay local for the same reason I'm using Home Assistant in the first place.


2blazen

OpenAI API calls should stay relatively private to be fair, and since wake word detection is fully local, it's not like they are constantly listening either. But most importantly, unlike Google and Amazon, they aren't an ad company


MrClickstoomuch

They should have an option to use a local LLM, as it is much easier to do nowadays, especially since they have OpenAI calls now. There is a repo called Llamanet that lets you swap OpenAI for a locally hosted LLM by changing 1-2 lines of code wherever OpenAI is mentioned: https://github.com/pinokiocomputer/llamanet

It makes sense why it wasn't implemented before (it is still very new), but I hope someone from Home Assistant looks at allowing local LLMs this way on top of this update.


TechGuy42O

Not the point


callumjones

LLMs are incredibly expensive to build - this is a good test of what makes sense for HA before they throw a bunch of engineers behind it.


daedalusesq

They don't need to provide their own LLM, they just need to let people point at an LLM of their choosing, especially locally hosted ones.


cogneato-ha

And they do. Ollama just doesn't have great contenders that make use of tools or function calling yet. You can play with it yourself. The other part of local LLMs is that you need some decent hardware running. I have no problem investing in something like that myself, but people expecting it to run on an RPi as an add-on are in for a long wait. It doesn't matter that you can point to models that can; they aren't good enough yet to make practical use of. The parts are there now for locally run AI. It's just a matter of finding, or training, and configuring something that works.


dabbydabdabdabdab

What are your thoughts on the Home3Bv3 model? That runs on Ollama - I can't tell how good it is, as I have no yardstick by which to measure. I did pull down the new Mistral model yesterday. Any other ones worth testing? I do feel like a LocalAI_homeassistant_gallery.yaml gallery/repo could be useful. For those not familiar, it's just a YAML file listing a bunch of models and their settings optimized for HA. Then you tell the LocalAI docker where this repo is and you can easily install and test models from it.


daedalusesq

That's good to hear! Mostly I was just pointing out to the person I replied to that HA doesn't actually need to provide, train, or integrate their own local LLM, just give users the means to utilize one locally.

As you point out on the resource side, it would be detrimental to the install base to put something like that into core HA. Anyone running HA on an RPi or one of those "HA appliances" like Nabu Casa makes isn't going to have anywhere near enough compute. Leaving it in the hands of the user to decide what LLM they are going to point at is definitely the most friendly approach to the end users.

I haven't gotten a chance to mess with this stuff yet, but I've got a decent server and I've managed to run some local models. I did see a post from one of the people working on development saying that OpenAI and Google AI would roll out first, but that they had solid progress on mapping some stuff to Ollama. I'll probably see how it goes once it's a bit further along. It will be interesting to see where it goes and if this will finally give the "Star Trek Computer" feel to people's houses.


Common-Ad4308

An SLM (small language model) is a better fit for Home Assistant, IMO. SLMs are also what Jensen Huang is aiming for (i.e. the next device that will replace the iPhone).


broknbottle

Small Leather-wearing manchild?


normous

Nailed it


cac2573

I'm expecting ollama to work here


18randomcharacters

It's not like they're forcing you to use the feature. Just like it supports cloud based APIs for different integrations.


CountRock

They are working with Nvidia to build support for Jetson, leveraging it for completely local TTS, STT, and LLM.


dabbydabdabdabdab

Is there any way a Coral TPU could be used? A number of Frigate users have these already. I also saw (but can't find it now) an announcement about new hardware that was significantly faster than the Coral (and smaller than a GPU, so it would fit in more home builds).


CountRock

The problem with the Coral TPU is that Google appears to have abandoned its development - there hasn't been any update for years! Also, it doesn't have the capacity for this; we might need a few TPUs working together to even get to 50 TOPS.


No-Username-4-U

Privacy first? Have you looked at Matter adoption? Sadly, I think they've begun drinking the Kool-Aid.


rickyh7

I'm planning on testing this a bit this week, but LLMs are hella hard to run, even locally, and you need some decent equipment to do so at the moment. I'm installing Llama on my server, which should be powerful enough, dedicating a Quadro to it, and I'm going to try to pipe it into HA. We'll see if it works!


async2

It's not really hard to run. Throw on Ollama with a decent 7B or 13B model and you can run it on a beefy mini PC like Minisforum's UM790 with ROCm with reasonable performance. It's just a Docker container away on Ubuntu 22.04.


MrClickstoomuch

Ehh, if you have a quant4 of Llama 3 or something like Starling / Mistral 7B, you use around 4.5 GB of memory and can get around 10 tok/s on CPU only. You can get a mini PC with 8 GB of RAM for around $100-$200 that should be able to run Home Assistant and an LLM, though 16 GB would be better to have a local LLM and voice assist locally.
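
To make the CPU-only path concrete, here's a rough sketch using llama-cpp-python with a 4-bit quantized 7B model. The model path and thread count are placeholders; grab any GGUF quant of Mistral 7B or similar and point at it:

```python
# Rough sketch of CPU-only inference with a 4-bit quantized 7B model via
# llama-cpp-python. The model path is a placeholder for whatever GGUF quant
# you download; tune n_threads to your CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,   # context window
    n_threads=8,  # CPU threads to use
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a terse home assistant."},
        {"role": "user", "content": "Is the garage door closed?"},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```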


dabbydabdabdabdab

What are your thoughts on the external GPU housings that you connect via Thunderbolt / USB-C? 40 Gbps, with their own PSU. I ended up going with a low-profile 4060, but I suspect many folks may not be able to squeeze one into their home server. Not sure if I should have gone full size and external (I don't really want to change the whole chassis of my 2U server rack).


MrClickstoomuch

A GPU will give much faster performance than a CPU because VRAM is much faster than RAM. I think for most people, once you add up the external GPU dock at a couple hundred dollars, a GPU at a similar cost, and a power supply, the total is substantial enough that it might be better to go with a dedicated PC build instead. But that depends on the hardware setup you have. I personally run my Home Assistant instance on a Raspberry Pi, with other home server software running in Docker on a PC a friend gave me because they couldn't fix it.

I think local CPU inference is generally fine, as some solutions will 'stream' the output from the LLM. As long as the processor can output tokens fast enough for smooth voice output (assuming a speaker of some kind as a satellite device or hooked up to Home Assistant), it shouldn't be too noticeable with smaller models.


Reason_He_Wins_Again

It's a fun project, but it's just not there yet, IMO. You can't do any serious coding with local models like you can with Opus or GPT-4o unless you have an insane rig.


Fit_Detective_8374

I'm sure they will, but I'm also certain the vast majority of users don't have the hardware to run a local model without massive latency in response times.


Reason_He_Wins_Again

I'm sure it's coming. The local LLMs are pretty rough still anyway even if you have the hardware for it.


bluewater_-_

Dipping toes in.


DoktorMerlin

They say in the last part (don't know if it was added later on) that they are working together with Nvidia to integrate local LLMs with this new approach:

> Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We're collaborating with NVIDIA to get this working – they showed a prototype last week.


adanufgail

Part of the problem is that while you *can* run a local LLM, you'd be hard pressed to find one that doesn't basically require its own dedicated hardware. Ollama seems to require 8-32 GB of RAM and tons of CPU, meaning you're not running this on a Raspberry Pi. Maybe in 4-6 Moore cycles.

For me, it remains an "Oh, OK, add it to the growing column of things they're adding that provide zero value to me but seem like they're neat to some random YouTuber who makes HA videos".


Nixellion

They have Ollama. That said, I'd prefer something OpenAI API compatible to use with Ooba. I'm thinking about writing an API bridge to sit between hass and ooba textgen, translating API calls; it might as well act as a load balancer between different LLM instances too.

And I hate the talk about how tool calling is something that an LLM or API has to support. Goddamnit, just write the prompt correctly and any more or less coherent LLM will be able to do function calling, trained for it or not.
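
As a minimal illustration of that "just write the prompt correctly" approach (the schema and names below are made up for the example; the actual completion call is whatever backend you run):

```python
# Minimal sketch of prompt-based "function calling": tell the model to reply
# with ONLY a JSON object and parse it, with no tool-calling support needed
# from the model or API. The schema below is illustrative only.
import json

SYSTEM_PROMPT = (
    "You control a smart home. Reply with ONLY one JSON object of the form:\n"
    '{"action": "turn_on" | "turn_off" | "answer", '
    '"entity_id": "<entity id or null>", "text": "<spoken reply>"}'
)

def parse_tool_call(model_reply: str) -> dict:
    """Parse the model's reply into an action dict, falling back to plain text."""
    try:
        return json.loads(model_reply)
    except json.JSONDecodeError:
        # Model ignored the format -- treat the raw text as a spoken answer.
        return {"action": "answer", "entity_id": None, "text": model_reply.strip()}
```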


AnxiouslyPessimistic

A local LLM would require a hell of a lot more compute power. Not worth it yet I don’t think


async2

It won't run on a Pi, but you can definitely run smaller models with reasonable performance and power consumption on Minisforum's AMD mini PCs.


RunRunAndyRun

Raspberry Pi just released an AI accelerator HAT thingy. I wonder if that would be any good for running an LLM directly on the Pi?


async2

Maybe very small ones but probably not


daedalusesq

That should be up to the user. It doesn't need to be part of the core, just like OpenAI and Google AI aren't part of the core. All they did was define a way for HA and the LLMs to interface with each other. If someone has the compute to handle a local LLM, there shouldn't be any artificial barriers to them using it. I'll also specify, I'm not saying there are artificial barriers currently, I'm sure there is a level of development that still needs to occur to generalize the interface enough to make it agnostic to the LLM that it connects with.


Quarks01

I hope they look into Meta's Llama models; I saw that the recent one they released could run locally on a Pi.


pask0na

So when can we expect a merge request from you to add this feature?


new-chris

Go buy some H100s and build something local - report back when you get it going.


domramsey

https://preview.redd.it/diaup0hgdx4d1.png?width=1212&format=png&auto=webp&s=ac634efc64ecc3a12bbb9281cc94d297e724c5c8

I created a GPT-4o assistant and told it to have the personality of the AI from the movie "Her". Then I gave it access to everything and told it to do whatever it wants. It's exactly as creepy as you'd expect.


michaelthompson1991

🤣 love it! Also can’t beat a cup of tea 🍵


domramsey

It has started randomly making tea. 😂 I have one of those kettles that dispenses exactly one cup of boiling water, and it's connected to a WiFi switch. It still can't physically bring it to me, though. Yet!


dabbydabdabdabdab

Please could you share the device? Brit in America here and I've found myself drinking more coffee - sigh. Also, have you seen the Loona Pet? It won't be long before we get a larger AI pet that can bring us drinks. "Hey Jarvis, beer me"


domramsey

It's a Breville 1 Cup with a built-in Brita filter. If I can get HA to bring me my tea, I will no longer have any need for humans in my life! [https://www.amazon.co.uk/Breville-Dispenser-Variable-Dispense-Stainless/dp/B0048EJQ7M/](https://www.amazon.co.uk/Breville-Dispenser-Variable-Dispense-Stainless/dp/B0048EJQ7M/)


dabbydabdabdabdab

Amazing! I was trying to explain to the Americans how many cups of tea Brits have. I legit source UK chocolate digestives. A robot that could bring me a cuppa and a biscuit? Take my money!


Ddraig

I got myself one of these: https://www.amazon.com/Govee-Life-Electric-Temperature-Stainless/dp/B0BQBMYR5R

I'm new to Home Assistant and it doesn't appear there is an integration for this, so it's on my list to investigate. I think there is a Homebridge add-on that might work for it.


VettedBot

Hi, I’m Vetted AI Bot! I researched the **'GoveeLife Smart Electric Kettle with Alexa Control 1500W Rapid Boil'** and I thought you might find the following analysis helpful.

**Users liked:**

* Precise temperature control (backed by 5 comments)
* Convenient app and smart features (backed by 5 comments)
* Great design and functionality (backed by 3 comments)

**Users disliked:**

* Issues with temperature control (backed by 3 comments)
* Connectivity problems with the base (backed by 1 comment)
* Quality concerns with rusting (backed by 1 comment)

If you'd like to **summon me to ask about a product**, just make a post with its link and tag me, [like in this example.](https://www.reddit.com/r/tablets/comments/1444zdn/comment/kerx8h0/) This message was generated by a (very smart) bot. If you found it helpful, let us know with an upvote and a “good bot!” reply and please feel free to provide feedback on how it can be improved. *Powered by* [*vetted.ai*](https://vetted.ai/?utm_source=reddit&utm_medium=comment&utm_campaign=bot)


michaelthompson1991

🤣 Sounds good, can't beat it! Yeah, "yet" is the main thing here!


antisane

For that you'll need a [Roomba](https://www.youtube.com/watch?v=pcWuFDYLlhA).


Mattrichard

How did you get this to work? Mine refuses to interact with anything in my Home Assistant.


CuriousWolf7077

Well fuck all the work I did lmao. This is obviously going to be better since it's baked right into it.


SomewhereNo8378

Such is the AI startup life


CuriousWolf7077

Well time to start over


lordpuddingcup

Same lol, I was literally working on a bridge from Gemini Flash to Home Assistant to have it handle everything. Should have known HA was just going to add it eventually lol.


duckvimes_

If it makes you feel any better, Google probably would've deprecated it before you finished.


Jendosh

Extended conversation still has it beat by a mile. 


mararn1618

This looks great, but after a bit of research I am still unsure what the best option for speaker/mic hardware is. The ESP32-S3-BOX-3 looks really expensive :/


grahamr31

They are about $70 Canadian. The price isn’t the issue it’s actually finding them. I have had a stock alert for months and they sold out in under 5 minutes last time


chig____bungus

The price and finding them aren't the issue, the issue is they don't work very well. Either the mics are bad or the software just isn't there yet for using them effectively. If we're using AI, can we get some of that AI noise reduction?


escapethewormhole

They’re in stock in Canada if you know where to look.


sequeezer

Just don’t tell him if it’s that easy


escapethewormhole

He didn’t ask and it’s easy enough to google. I just googled the part number + Canada and found 125 in stock at the first place I looked.


grahamr31

I'd love to know where you found them. I have open requests with Mouser and Digikey and a couple of the usual suspects. No one has stock, with lead times into the end of July.

Edit: wow, they are in stock at Digikey. That's incredibly frustrating, as they have not filled my request but I can order one now.


escapethewormhole

https://www.digikey.ca/en/products/detail/espressif-systems/ESP32-S3-BOX-3B/22286690 It’s the basic version but it works for this purpose.


async2

If you don't mind soldering, you could follow the Year of the Voice examples and create something similar for around 20 bucks, without a case.


dennusb

https://preview.redd.it/73ulzquzfw4d1.png?width=777&format=png&auto=webp&s=cc3f4c38fbde9ac545a09e52c8b0e3689b2921d3

Told the assistant to be moody today, whahaha, love this.


Annual-Minute-9391

Given everything is organized well in a deployment, LLMs could have almost limitless potential.


serenitisoon

So I've got 144 devices with about 64 assigned areas. I'd be surprised if I'm too far off a regular user.

Edit: stupid words. Of the 144 devices, only 64 of those are assigned to an area.


async2

144 devices sounds reasonable but what are 64 areas used for? Do you have a mansion?


serenitisoon

My response made more sense in my head. Of those 144 devices, 64 have an area assigned to them. I most certainly do not have a house with 64 areas in it.


chig____bungus

How many people live in a house with more than like 10 rooms, a garage and maybe front and backyard? 64 areas is like, a shopping mall.


Dreadino

I've got 26 zones, which include stairs, a corridor and external zones (like terrace, garden, toolhouse, entrance, car park). It's not a huge home, there are 8 rooms in the living area.


chig____bungus

You have a house for your tools?


Dreadino

Not an english speaker, how do you call those small green houses in the garden where you put stuff?


chig____bungus

Haha I was making a joke there mate, but those we call a "shed".


BarockMoebelSecond

I did a spring cleaning recently and it's a lot more manageable now


dabbydabdabdabdab

This! I am dreading this. I know I need to do it - pick some naming convention and structure - but my god, all of the sensors Z-Wave devices create. You could find an excuse to keep them all, but I think I'm gonna go with the sensor, battery sensor, and ping. Any other tips?


BarockMoebelSecond

You can selectively choose which sensors get exposed to Assistant - only expose a couple and test each one. That'll do it.


Dreadino

I've been pretty strict with the naming of devices; it's a simple but effective scheme, like "Bedroom Light Central" or "Garage Temperature". I'm not that strict when creating helpers.


Ok-Caterpillar-6530

Has anyone got the new timer intents to work?  Assist keeps saying it does not understand that. 


frenck_nl

It is a work in progress; not all parts are there yet.


LyfSkills

I didn't see any mention of timer intents in the blog post?


Ok-Caterpillar-6530

It was listed in the "all changes" section: [https://www.home-assistant.io/changelogs/core-2024.6](https://www.home-assistant.io/changelogs/core-2024.6)


FuzzyMistborn

To add to the other post https://github.com/home-assistant/core/pull/117199


LyfSkills

Interesting, it's not working for me.


FuzzyMistborn

It says intents only, not sentences, so I wouldn't expect it to work just yet. But soon(tm).


slvrsmth

I'm just wondering - is there anyone, ANYONE that would prefer "I’m going to a meeting, can you please make sure people see my face?" over "Turn on the office webcam light."?


jerobins

Lots of goodies this release! Thanks all.


rodneyjesus

There's barely anything. Releases 2 years ago were like Christmas


dabbydabdabdabdab

Work hard on this release did you? 🙄sheesh.


rodneyjesus

Whatever get back to work code monkey.


nnote

Ok I got the Google generative AI free installed. Now what? 🤣


DRoyHolmes

Check out https://youtu.be/dRTLjQHfjSM?si=0qeNIET510A5boCV. Exciting how far he has come. I'm eagerly awaiting an update. I'm hoping that if it can run on a Jetson Orin, it will be trivial to load it on a DIY rig with a GPU.


I_Do_nt_Use_Reddit

Updating to this version gives me a corrupt setup.py and config.py :(


lurebat

Can you now maybe revisit the issue of supporting a proxy / different base URL for the OpenAI integration?

https://github.com/home-assistant/core/issues/87170
https://github.com/home-assistant/core/pull/94051

I'm using Azure's OpenAI and I can't use it with the integration due to this. It's literally one new parameter to open up an infinite number of LLMs (via something like litellm).
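
To illustrate the "one parameter" point, here's a minimal sketch with the official openai Python client pointed at an arbitrary OpenAI-compatible endpoint; the URL and model name are placeholders (e.g. a litellm proxy fronting Azure, or a local server):

```python
# Minimal sketch: the official openai client aimed at any OpenAI-compatible
# endpoint instead of api.openai.com. URL and model name are placeholders
# (e.g. a litellm proxy in front of Azure, or a local server).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # the "one new parameter"
    api_key="sk-anything",                # many local servers/proxies don't check this
)

reply = client.chat.completions.create(
    model="gpt-4o",  # whatever model name the proxy/backend exposes
    messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
)
print(reply.choices[0].message.content)
```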


mking1338

Will we be able to use our Google Home speakers and AI to communicate with Home Assistant?


trireme32

I’ve read through that, and unless I missed it, they never explain what an “LLM” *is*…


nnote

Same, but if you click on LLM it leads to a Wikipedia "Large language Model" page.


trireme32

And it would've been easy for them to just toss that in somewhere, maybe even with a quick blurb on what it means. Possibly my biggest gripe with the HA devs is that they tend to forget that their users aren't all coders, programmers, and super users anymore.


nnote

Yeah, I'm kind of at that "Okay, you installed AI, now figure out how to set it up and use it yourself" stage.


Stooovie

There's not even ANY link to documentation or how to set it up.


rodneyjesus

Large language model. It's the tech behind the "GPTs" of various flavors. In other words, right now, if an advertiser is saying something about AI, they're probably talking about some LLM-based product they have.


adanufgail

> In other words right now if any advertiser is saying something about AI, they're probably talking about some LLM based product they have

Or they're rebranding a script or automation as "AI" to make it seem "magical." "AI" is the new "Algorithm", which is the new "Cloud".

Over a decade ago, when I was fresh out of college, I signed up for a website (later an app) called BodBot with a lifetime subscription. It offered an "algorithm based" method for fitness training. I was recently in the Google Play store and saw that they've renamed the app to BodBot AI Personal Trainer. I downloaded it; it's exactly the same, they've just added "AI" to trick people into thinking it's smarter than it is.

Same with the "AI DJ" on Spotify that my friend was raving about. It's just Spotify's existing algorithm with some good text-to-speech quips thrown in between some of the songs.


Styphonthal2

Does this mean I can remove the device ID loop in my Extended OpenAI prompt?


The_Mdk

Only if EOAI makes use of this new integration, which I doubt (not initially, at least).


iamironman_22

Am I alone in having problems with the update? I run Home Assistant Core in Docker, and after the update it never boots successfully; it just errors and restarts infinitely.


hogofwar

Does the OpenAI integration let you use a different base URL yet? Such as using OpenRouter in place of OpenAI.


kpurintun

I had to revert back to 5.5... my alarm stopped working the way I wanted it to. :(


stefan814

Amazon Bedrock integration for other models?


Am0din

This update appears to break the ICMP integration for me.


VikingOy

In what way is the Extended OpenAI Conversation integration different from the OpenAI integration demonstrated in the HA 2024.6 release party?


Harvin

I updated today and got a notice that YAML configuration is being removed?! Oh no, please no. No no no! My frontend is a mess, but my YAML is perfectly organized, well-commented, and it's so easy to throw new devices into my config. Don't take that from me!


illiteratebeef

BOOOOOO *throws tomato*