
The Hardware Restriction Rules have been a bit confusing:
```
You are limited to the following or equivalent hardware specifications: **Multi-core processor (e.g., Intel Core i7/i9, AMD Ryzen 7/9)** for efficient processing.
```
No explicit requirement for RAM has been specified. Is there a specific limit for RAM?
A better idea for the Hardware Restrictions would be to limit the **LLM by size**.
Example: only open-source models under 13B parameters may be used.
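A size cap like that would also be easy to verify programmatically. Here is a minimal sketch using Hugging Face `transformers`; the model name and the 13B threshold are illustrative assumptions, not anything from the rules:

```python
# Minimal sketch: check that a model is under a hypothetical 13B-parameter cap.
# Assumes the `transformers` and `torch` packages are installed; the model
# name and PARAM_CAP are illustrative only.
from transformers import AutoModelForCausalLM

PARAM_CAP = 13_000_000_000  # hypothetical 13B cap

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
n_params = sum(p.numel() for p in model.parameters())

print(f"{n_params / 1e9:.1f}B parameters")
assert n_params < PARAM_CAP, "model exceeds the size limit"
```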
I think they are always referring to inference, so I would say RAM would not make a big difference.
RAM would probably make a difference when loading models for inference on CPU.
Example: I can load huge 30B/70B models into 256 GB of RAM and run inference one request at a time.
But is that even valid under the rules? That is the question.
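For context, the RAM footprint of the weights can be estimated roughly as parameter count times bytes per weight, plus overhead for activations and the KV cache. A back-of-the-envelope sketch; the sizes and dtypes listed are assumptions for illustration:

```python
# Back-of-the-envelope RAM estimate for holding model weights in memory.
# Rough rule: parameters x bytes-per-weight; real usage adds overhead
# (KV cache, activations, runtime buffers). All figures are illustrative.
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}

def weight_ram_gb(params_billions: float, dtype: str) -> float:
    """Approximate GiB of RAM needed just for the weights."""
    return params_billions * 1e9 * BYTES_PER_WEIGHT[dtype] / 1024**3

for size in (13, 30, 70):
    for dtype in ("fp16", "int8", "q4"):
        print(f"{size}B @ {dtype}: ~{weight_ram_gb(size, dtype):.0f} GB")
```

By this estimate, a 70B model at fp16 needs roughly 130 GB for the weights alone, so it does fit in 256 GB of system RAM for slow, sequential CPU inference, and a 4-bit quantization drops that to around 33 GB.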