The approach here for 64-bit ARM on servers has been to say "it must run UEFI". Then the vendor-specifics get dealt with by UEFI and ACPI, and the kernel can simply assume those facilities exist (and the distro can install new kernels in the usual UEFI way without having to know how to flash them into 500 different bootloaders). You can argue the merits and demerits of UEFI (and people do!) but there's a lot of benefit in pushing for "all server hardware must work like this".
I wish Google would mandate that with newer Android versions instead of this /vendor crap in Oreo. Microsoft mandated UEFI for their phones, but unfortunately their bootloaders are still locked. If they weren't though, it'd be a great standardized platform for alt OSes.
Newer Qualcomm-based Android devices are coming with UEFI.
Why not Coreboot?
There isn't much abstraction of the underlying hardware in coreboot, which can be either a good thing or a bad thing.
Why Coreboot? UEFI is open source (EDK II / TianoCore) if you make it so.
Coreboot is also very x86 centric last I checked.
Coreboot has some pretty solid ARM support too. Modern Chromebooks (both x86 and ARM) use Coreboot with a "depthcharge" payload. You can even use Libreboot at this point to bring up an RK3288 and boot into ChromeOS or a traditional Linux. Even cooler: if you do your own build of Coreboot, you can provide your own OS verification keys and use vbutil to sign your own OS, giving you custom verified boot.
>> Coreboot is also very x86 centric last I checked.
Actually RISC-V is fully supported:
Real hardware coming soon...
Isn't UEFI way more complex? But I'm not sure if that's a problem because of Windows and all the drivers it has to support, or if it's a problem with the specification. I believe it was the latter? I think Google mentioned that in its recent talk on removing Intel ME.
UEFI is more complex but also nicer for management.
Developing a runtime on UEFI is easy to set up and get going; you don't have that in coreboot.
Because UEFI gives you more than boot support: it also acts as a firmware interface even after the kernel has booted. The kernel can continue to use UEFI runtime services for hardware-specific things, which are abstracted behind the UEFI interface.
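To make those runtime services concrete: on Linux, UEFI variables are exposed through efivarfs, where each file carries a 4-byte little-endian attribute mask followed by the raw variable payload. Here's a minimal sketch of decoding one; the layout is the efivarfs convention, and the file path shown is just the standard location for the `BootCurrent` variable (the GUID is the UEFI global-variable GUID).

```python
import struct

# Attribute bits defined in the UEFI specification.
EFI_VARIABLE_NON_VOLATILE = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS = 0x4

def parse_efivar(blob: bytes):
    """Split an efivarfs blob into (attribute mask, payload).

    efivarfs prepends a 4-byte little-endian attribute word to the
    variable's data."""
    attrs, = struct.unpack_from("<I", blob, 0)
    return attrs, blob[4:]

def boot_current(blob: bytes) -> int:
    """BootCurrent's payload is a single little-endian UINT16 entry number."""
    _, data = parse_efivar(blob)
    value, = struct.unpack("<H", data)
    return value

if __name__ == "__main__":
    path = ("/sys/firmware/efi/efivars/"
            "BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c")
    try:
        with open(path, "rb") as f:
            print("booted via Boot%04X" % boot_current(f.read()))
    except OSError:
        print("no efivarfs here (non-UEFI boot?)")
```

This is the same data `efibootmgr` prints; reading it only needs efivarfs mounted, no special tooling.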
We're now in the 3rd generation of Arm servers, all built to the same set of specifications with multiple vendor SoCs by tens of different OEM/ODMs - SBSA (Server Base System Architecture) and SBBR (Server Base Boot Requirements).
The goal of these specs was to make these servers as "boring" as possible, i.e. as similar to an x86 server as possible, so that neither OEM/ODMs nor IT staff have to distinguish between supporting an Arm server and an x86 one.
This means that you will be able to boot the same binary OS distribution on every machine. There are no more BSPs like in the old 32-bit Arm world. This means that the OS does not need to know anything about clock domains, pin muxes, GPIOs or DVFS beyond the standard facilities exposed via ACPI. Like x86, machine error handling is firmware-first. PCIe works the same way as on x86. AP core boot-up and power-off is abstracted through PSCI (the Power State Coordination Interface). TrustZone is not available for OS use and is purely used to implement resident portions of firmware (RAS error handling, PSCI, SDEI).
For people asking why UEFI and why ACPI, the answer is very simple: because that's what 99% of all deployed servers use. Using something else is just a friction point for the OEMs, ODMs, IHVs and firmware vendors. It would also be a friction for anyone consuming these systems. Sometimes, you have to be the adult in the room and say that you don't need an "ideal" solution, but the existing solution will work. Plus, the UEFI+ACPI world only really works when you are able to make all systems look more or less the same, hiding all the nitty-gritty shitty bus accesses (I2C for power buttons, SPI for flash, GPIO etc) completely in the firmware, not for the OS to care about. The OpenPower ecosystem didn't see this at all, and they have an elegant firmware solution (hostboot + skiboot + petitboot), but... why bother? It's yet another way to boot, and it just makes their systems foreign to 99% of everyone making and using servers. The OpenPower booting was basically built for Google, but the reality of Google is very different from the realities of the world.
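As an aside on how thin the PSCI abstraction mentioned above really is: PSCI operations are just SMC "fast calls" with well-known function IDs, encoded per the Arm SMC Calling Convention. A rough sketch of that encoding (the bit layout and the PSCI IDs are from the published specs):

```python
# SMC Calling Convention function-ID layout:
#   bit 31     = fast call
#   bit 30     = SMC64 (64-bit argument registers)
#   bits 29:24 = owning entity (4 = Standard Secure Service, home of PSCI)
#   low bits   = function number
FAST_CALL = 1 << 31
SMC64 = 1 << 30
OWNER_STANDARD = 4

def smc_fn_id(function, smc64=False, owner=OWNER_STANDARD):
    """Build an SMC fast-call function ID from its fields."""
    fid = FAST_CALL | (owner << 24) | (function & 0xFFFF)
    if smc64:
        fid |= SMC64
    return fid

# A few well-known PSCI 0.2+ function IDs:
PSCI_VERSION = smc_fn_id(0)              # 0x84000000
PSCI_CPU_ON = smc_fn_id(3, smc64=True)   # 0xC4000003 (64-bit entry point arg)
PSCI_SYSTEM_OFF = smc_fn_id(8)           # 0x84000008
```

The kernel issues these with an `smc` instruction and the firmware resident in TrustZone does the actual power sequencing, which is exactly why the OS doesn't need board-specific code for it.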
How is uboot fundamentally different than BIOS or UEFI/EFI? It's not like all the PC clones through the ages had the same bootup sequence for RAM and peripheral init.
uboot is more stripped down than a traditional INT-call-based BIOS because it doesn't provide a boot API or any "runtime" services. Basically it does little more than early hardware init, then jumps to the kernel image. For example, it doesn't have an execution environment for option ROMs. It's great if you're shipping a device with a fixed hardware configuration and kernel, but what ends up happening is that it fails to provide "platform" abstractions to the kernel. The result is constant churn at the platform level, because even simple operations like "I want to boot this kernel/configuration on the next boot" cannot be communicated from the OS/kernel to the firmware in a standard way.
UEFI, OTOH, is a bad combination of BIOS and Open Firmware, but it has standardized an execution environment that allows device vendors to build standalone "driver" packages that enable booting off plug-in network boards, RAID controllers, graphical displays, etc. Those drivers can then either be installed in the firmware or provided on option ROMs. There is a higher-level API so that grub/whatever can, say, read a config file written by the OS without having to know the underlying technology.
Basically, uboot is great for devices that could do without firmware and just boot a kernel; UEFI is useful if you want a standard environment usable by a generic kernel/OS across a wide range of devices, because combined with ACPI/AML etc. it abstracts away much of the underlying platform management.
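The "boot this kernel next time" example above maps onto two UEFI global variables: BootOrder (an array of little-endian UINT16 entries, each naming a Boot#### variable) and BootNext (a single UINT16 consumed on the next boot only). A sketch of their payload encoding, per the UEFI spec; note that writing these back through efivarfs additionally requires the 4-byte attribute prefix in front of the payload:

```python
import struct

def decode_boot_order(payload: bytes):
    """BootOrder: an array of little-endian UINT16 entries.

    Entry 1 refers to the Boot0001 variable, and so on."""
    count = len(payload) // 2
    return list(struct.unpack("<%dH" % count, payload))

def encode_boot_next(entry: int) -> bytes:
    """BootNext: one UINT16 selecting the entry for the next boot only."""
    return struct.pack("<H", entry)
```

In practice you'd use `efibootmgr -n 0002` rather than writing the variable by hand, but the point is that this channel from OS to firmware is standardized, which uboot has no equivalent of.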
It's different because uboot typically resides on what I would refer to as "end-user" storage. If it booted from an on-chip or on-board flash/ROM device and then loaded content from the end-user's device, you're right -- in that case it would be remarkably similar to BIOS/UEFI.
> Our goal was to develop a single operating platform across multiple 64-bit ARMv8-A server-class SoCs from various suppliers while using the same sources to build user functionality and consistent feature set that enables customers to deploy across a range of server implementations while maintaining application compatibility.
I wonder how successful this was. Previously, all x86 CPUs (including x86_64) would bootstrap into the same real mode as 1970s-era CPUs and preserve all the functionality of the original ISA (we still talk to the RTC via inb/outb, for example). I suppose this changed a little bit after EFI/UEFI was offered?
ARM CPUs were not bound to this backwards compatibility so AFAIK every vendor could implement their own bootstrapping functionality, and therefore having a single bootloader was challenging/impossible? uboot is a popular basis solution but IIRC everyone provides their own tweak to suit their SoC. Does TrustZone normalize the bootstrapping process for ARM devices such that we can write a single bootloader binary and expect it to work the same way across ARM server SoCs?
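On the RTC-via-inb/outb point above: the legacy CMOS RTC is read by writing a register index to port 0x70 and reading the value from port 0x71, and most of its registers hold BCD rather than plain binary, so drivers still carry conversion helpers like this sketch (the actual port I/O is only shown in comments, since it needs ring 0 or ioperm()):

```python
def bcd_to_bin(value: int) -> int:
    """CMOS RTC registers traditionally hold BCD: 0x59 encodes decimal 59."""
    return (value >> 4) * 10 + (value & 0x0F)

def bin_to_bcd(value: int) -> int:
    """Inverse conversion, used when setting the clock."""
    return ((value // 10) << 4) | (value % 10)

# On real x86 hardware, reading the seconds register looks roughly like:
#   outb(0x00, 0x70)                  # select CMOS register 0 (seconds)
#   seconds = bcd_to_bin(inb(0x71))   # read and decode it
```

Forty-odd years on, that interface still works on every PC, which is exactly the kind of compatibility anchor ARM never had.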
Shamelessly stealing links from the discussion on https://lwn.net/Articles/738898/
I hope prices on high-performance ARM hardware can come down a bit. Currently there's nothing between Chromebooks and $3000+ servers. On the other hand, if I were in the market for a high-end server, it looks pretty competitive vs. Xeon or Epyc. Any good benchmarks out there?
Full 96-core bare-metal ARM64 servers from $0.50 per hour:
Or low-end VMs from €3/month:
HPC stuff: https://www.nextplatform.com/2017/11/13/arm-benchmarks-show-...
softiron.com for the OverDrive 1000.
Hmmm, there's a lot of interesting stuff over there. If I didn't already have a dedicated router, the ClearFog boards look like a very nice router platform.
Yeah, I'd love to get a couple of ARM servers for home use.
It seems like the ThunderX chips from Cavium are the most prevalent 64-bit arm marketed as server platforms. Very high core count, high memory capacity.. I've been hoping that these things take off because I love the idea: http://www.cavium.com/ThunderX_ARM_Processors.html
System76 has a server using this chipset: https://system76.com/servers/starling
and Gigabyte has a whole line of them:
SoftIron.com. Cheapest option (OverDrive 1000) is $599.
https://www.solid-run.com/marvell-armada-family/armada-8040-... - $369
https://www.phoenicselectronics.com will sell you a Gigabyte MT30-GS2 (Cavium ThunderX 32-core) 1U system for around $2k. If you want to provide your own ATX case - much less.
ThunderX2 and Qualcomm Centriq (3rd gen arm server) systems have been recently announced (as in GA), but those will set you back quite a bit because they're not toys. But if you look at the 1st and 2nd gen systems, those are quite approachable.
It should be pointed out that the solid-run machine, while in theory a decent machine, isn't going to run Red Hat, or for that matter much beyond the image it's shipping with. That _may_ change, but right now it's not quite done cooking.
That said, outside of the 10G Ethernet, it's pretty much bested by just about every m-ITX x86 board in that price range. Plus, if you happen to need the 10G, it's still probably less expensive to pick up a G4400 + motherboard + 10GBase-T board (new dual-port on eBay for about $100) and best that machine in most cases.
“besting” is very relative. Workloads in such footprints are usually power-constrained, and I am not aware of any x86 solutions involving a discrete NIC that can do 20Gbps at 35W.
Yep, ACPI support is still evolving, but it's a matter of time. Folks have finally figured out that if a system isn't compliant with SBBR and SBSA, there will be plenty of competitors' systems that are, absolving you of the headache of dead-end BSPs.
Maybe something like http://asrock.com/mb/Intel/J3455-ITX/ (random hit with a J3455 core, $74 at newegg at the moment)
The base processor is rated at 10W for 4 cores; add ~8W for the 10GBase-T board (plus a bit) and it's a competitive solution, particularly if your workload needs more RAM, etc. If you feel like you want a little more beef, the Denvertons are hitting the market, and many of them have the 10G integrated.
That one also has a BMC, which in my past experience tends to add a few watts too. Without testing them side by side it's hard to know which draws more power in a given workload, particularly since the Intel machines have become very dynamic over the last couple of generations; it's actually pretty hard to hit their TDP numbers in most cases (particularly without heavy FP workloads).
Now it's become more about what the motherboard manufacturer has integrated. It's all fine that the core etc. draws 8 watts; the problem happens when the motherboard manufacturer decides to glue an ASPEED BMC and an old Marvell SATA controller onto the board. Between them, at idle, they draw 2x more power than the SoC does running at peak. It's pretty easy to move the numbers one way or the other with unrelated changes.
EDIT: Discovered after posting that the PCIe slot on the ASRock is only x1, which keeps it from taking one of the X540-T boards, but they have a couple of variations, including one for less money in m-ATX with a potentially better slot layout and full-size DIMMs.
Cavium and Qualcomm sell the "top" Arm servers. There's also AppliedMicro but not sure what kind of future they have there.
HPE is planning to offer Apollo 70 w/Cavium and/or Qualcomm's chips.
Anyone know where one can buy an ARM server to run this on?
Does anyone know how current ARM cost (equipment cost + power cost at ~0.9 utilization), amortized over, say, three or five years, compares to the latest generation of Xeon and Epyc?
CentOS and RHEL are different distributions. CentOS supports 32-bit x86 architecture, for example, while RHEL only supports x64.
My understanding was that CentOS is simply a rebuild of all RHEL packages with the Red Hat branding removed. If there's different arch support tho, then there's probably more to it than I thought.
Well, basically that's right, and if they're rebuilding RHEL packages anyway, they can just rebuild them for more architectures. Of course, that implies that important packages actually support those architectures. RHEL offers more than just a binary distribution; they have to support their product, so they decided to go with fewer architectures than theoretically possible. That's how I see it.
The news is that you can now get a support contract from Red Hat for RHEL on ARM64. It has been possible to run it without such support for a long time.
I guess it's now supported. You can buy a support contract from Red Hat and when your ARM server has a problem, they'll help.
RHEL 7 only supported x86_64, so CentOS 7 only "officially" supports x86_64. While AArch64, i386, and other arch builds are available from the AltArch SIG, they aren't maintained by the core.
With RHEL 7 supporting AArch64 after this announcement, I'd assume CentOS will follow suit.
RHEL 7 dropped i386. But of course also continues to support Power and s390 (IBM mainframe).
CentOS 5/6/7: https://wiki.centos.org/FAQ/General#head-059f2f807ebb83e93f2...
RHEL 7.4: https://access.redhat.com/documentation/en-us/red_hat_enterp...
CentOS 5 and 6 are not supported based on the link you provided.
Enterprise-level support vs. use-at-your-own-risk science project.
I had the impression that CentOS 7.3 had been available on bare-metal ARM64 on Scaleway for a long time. Is it really new? Maybe I don't understand what this announcement is about.
ARM64 support was released on Debian and Ubuntu 5 or 6 years ago. Why has it taken Red Hat so long? They were part of the same ARM server club.