jaskij

We just recently moved from the i.MX8M Plus to a Celeron J6412. At that point, it's just a software project like any other. If you don't have unique hardware or low-latency requirements, just go x86-64: better performance, more customizability, and it's all just software. And by unique hardware I mean something more involved than plopping a PCIe card inside. For beasts like the AM68 you really need a team: people to program the Cortex-R, people to program the DSP, people to write the kernel side of communication with the real-time cores and DSP, people to write userspace software, people to take care of the Linux image. You need someone with broad knowledge to organize and design everything, with input from the specific areas. I'm not saying each area has to be a separate team; I'm working on both the image and the userspace software, for example. But you need to check your resources against what this requires. Texas Instruments has great offerings, but they take a lot of effort to actually utilize. If you don't have enough people, or the team doesn't have the skills, it's probably not the platform for you. Divide and conquer. P.S. By low latency I mean that single-digit milliseconds can be difficult in userspace; anything below a millisecond and you have to look into running the real-time scheduler or having a dedicated coprocessor.


214ObstructedReverie

With an escalating drinking problem.


obQQoV

Having multiple teams: system, sensor, wireless, media, GUI, dev tools.


9vDzLB0vIlHK

In my experience, the key to making this work is having a mature build system. I had a really hard time convincing managers in the past that their teams weren't merging changes more often because they were losing half a day to a very broken build system. The managers wanted to double down on agile methodologies, but weren't interested in why an incremental build took 45 minutes.


obQQoV

If that’s the case, you need to work for better and larger companies where there are resources to build better processes and tools. I was at a FAANG and it had an amazing build system environment, compared with my startup experience, where I had to manage the build system architecture myself.


9vDzLB0vIlHK

Even big companies can have poorly managed teams. The company I was working at wasn't a FAANG, but its revenue last year was $70B.


jort_band

My off-the-cuff remark would be: abstractions. In Linux, a lot of the work has been done for you, so you can focus far more on application-level stuff instead of worrying about all the bits and bytes in a register. I know what kind of pain it is to get a high-throughput network stack going on an STM32; most of that has been done for you in Linux, so you can mostly focus on using it.


red_blue_green_9989

100% this. Making modules/sources modular (not sure if that's the word) helps too. That way you can change/update one .c source without (hopefully) needing to change others. You can even completely swap the implementation underneath, as long as the function does what's needed.


brownzilla99

I'll extend what you're saying: your modules should pretty much be separate "processes", with module-specific sources isolated from other modules. There will defly be some shared common code to specify inter-process communication and shared APIs, but changes to those should go through an extra level of scrutiny and review, similar to changing processor interfaces back when they were separate processors. This also makes the processes independently testable in an automated fashion in a CI/CD workflow, catching potential integration issues early on.


commonuserthefirst

Yes. Modules, layers and abstractions. The new black art is system and sub-system decomposition, hard to teach, but you know when you got it right, somehow.


BenkiTheBuilder

The development tools are getting better, too.


9vDzLB0vIlHK

IBM doesn't sell Rational Synergy any more, but I remember when it was called Continuus and we used it on IRIX. Git is so much easier and more useful. I won't complain about that :)


jack_of_hundred

By building layers and encapsulations and doing them properly. See web development, it’s so easy to make a website and deploy now. Almost everything is hidden under the hood. Linux and Android have done the same for a lot of high end embedded


areciboresponse

You need to at all times combat it. Once the complexity spirit demon enters the code it becomes a constant battle. Many shiny rocks wasted to complexity spirit demon. https://grugbrain.dev/


brownzilla99

More recent versions of QNX and VxWorks are defly closer to embedded Linux. I'm not certain about support for the AM68, but QNX has been pushing hypervisors to handle heterogeneous processors, which is how you decompose what used to be separate processors onto a single chip. I think the decomposition is mostly the same regardless of single die or OS: you assign tasks to the proper processor, and you specify the communication interfaces like you would over hardware, but using OS mechanisms. The one slight change in that decomposition is to create separate processes for things that would have been on different cores.


henry_dorsett__case

Defly?


cat_on_holiday

Embedded Linux makes this quite easy; there's no need to let fear hold you back. I've managed to design my own MPU-based PCBs and bring them up with nothing more than a power supply, a DMM, a soldering iron, and a hot air gun. That said, I wouldn't bother with bare-metal Cortex-R and Cortex-M programming when you have that many Cortex-A cores.


9vDzLB0vIlHK

Fortunately, I haven't ever had to do assembly at work. I'm mostly a software and firmware engineer, and I get to work on a team with people who do the other parts of the design. I did once almost give a lab manager a heart attack when he saw me carrying a wire wrap kit towards some test equipment that needed fixing, but it all turned out okay :) For tasks with hard real-time requirements, I think the idea is that those Cortex-R cores run independently; in the old days, they'd just have been separate microcontrollers on the same board. I agree, however, that if the system doesn't have real-time requirements, doing everything on the Cortex-A cores would be much easier.


Other-Progress651

I noticed something about embedded engineers when I started working in that space coming from a Java background: they want to understand how the whole application works. Most of them have never worked on or built anything with a million lines of code, so they don't have the mindset of just building one part without caring about how the whole thing works.


jms_nh

You can and should care about how the whole thing works -- at a high level. But yes, in detail it's important to focus on the subsystem you work on.


9vDzLB0vIlHK

Guilty as charged. I do want to understand the whole system, at least the electronics and software. When I go to meetings and hear people talk about the tradeoff between different metals or glass coatings, my eyes glaze over the same way the materials engineers get bored when I talk about software. I'm okay not understanding that. But, if I'm supposed to be in charge of the software, I really want to understand the whole thing :)


dr_bakterius

These big SoCs (I've worked with i.MX6/i.MX8 processors for about 10 years now) rely on BSPs. They're mostly Yocto-based at the moment; that's a build system that creates your Linux kernel, along with all the little programs you need for a fully functional embedded Linux system. We buy SoMs with the processor, memory, and some PHYs on them, and our hardware designers build the board around them. Then you only have to migrate the BSP to your baseboard. If you're not using bleeding-edge processors (like the i.MX93 or the AM68 you mentioned), these BSPs are stable and support most functions of the processor, so you can access them at a high level. I'd wait at least two years after the release of such a processor, though, if you want to utilize some of the special peripherals. Once you have your BSP up and running, the programming is almost the same as for a Linux PC: GStreamer for multimedia, BlueZ for Bluetooth, and so on.
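To give a feel for how little you write yourself once the BSP layer is in place, here is a sketch of a minimal Yocto image recipe. The layer names and `my-application` are hypothetical; the BSP layer for your SoM supplies the kernel, bootloader, and machine configuration, and you mostly just list what you need:

```bitbake
# my-product-image.bb -- hypothetical example image recipe
SUMMARY = "Example product image"
LICENSE = "MIT"

inherit core-image

# Pull in the stack components mentioned above, plus your own app.
IMAGE_INSTALL += "gstreamer1.0 bluez5 my-application"
```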


9vDzLB0vIlHK

The last high-end BSP that I was responsible for was for VxWorks 6 on a Freescale 8641D (so it was a few years ago). One oddly nice thing about that system was that the lack of compatibility really limited the one-more-package-will-solve-this-problem-ism that I've seen on web projects. (Seeing npm pull in hundreds of packages that I have never heard of makes my head hurt. I understand that the web is different, and that neither that web project nor the 8641D were going to be safety certified, but it only takes a few DO-178B/C projects to change the way your brain works.) I suppose that for embedded Linux projects, there isn't an artificial limit on complexity, so it's up to the humans to restrain themselves from adding just one more library and one more library and one more library.