Mark Zuckerberg Is Unironically Based Now

Arguably they’re getting into the operating system space, if you believe, as they do, that VR headsets are the computers of the future.

Once they start producing hardware that’s actually mainstream, I’d expect their contributions to open-source operating systems and hardware to decrease. That’s the common pattern with companies that are good about contributing to FOSS. Remember, once upon a time Google was basically the FOSS golden child until it wasn’t; maybe Facebook has taken up that mantle, but only temporarily.

2 Likes

That might be true of VR glasses or contact lenses. But imagine everyone wearing these VR headsets; it’s just ridiculous, and I hope that future never comes.

If I exec my FOSS code on Intel CPUs or Nvidia GPUs for my compute, is it not FOSS?

To be clear, having open data is a good thing, but I see it as distinct from releasing the model weights themselves under a permissive license.

1 Like

I’m not really sure how the comparison you’re making applies.

From my point of view, an LLM like Llama is essentially a binary you can run on your computer. To build that “binary” yourself, you would need the training data and training code that produced the model weights (i.e. the source code), and those are not released.

There’s no other context in the software world where a free-to-use binary that you can’t build yourself is considered “open source.” I’d say Llama is freeware, but it’s definitely not open source or even source available.
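
To make the analogy concrete, here’s a rough sketch of what “running the binary” looks like. It assumes the llama-cpp-python bindings and a local GGUF copy of the released weights (the file name below is just a placeholder); you can execute the artifact, but nothing here lets you rebuild it, because the training data and code behind the weights aren’t published.

```python
# Rough sketch: running the released Llama weights like a prebuilt binary.
# Assumes the llama-cpp-python bindings and a local GGUF file (the path is a placeholder).
from llama_cpp import Llama

# Load the "binary": an opaque artifact of weights we didn't, and can't, build ourselves.
llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf")

# Run it, the same way you'd run any freeware executable.
out = llm(
    "Explain the difference between freeware and open source in one sentence.",
    max_tokens=64,
)
print(out["choices"][0]["text"])

# There is no "make" step here: the training data and training code that
# produced this file are not available, so it cannot be rebuilt from source.
```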

2 Likes

I don’t really know much about AI, but even if you could build that “binary” yourself, wouldn’t building it require a massive amount of computing power?

The point of open source isn’t necessarily to build everything yourself, but to be able to learn from and improve upon existing knowledge in the world. Very little about how models like Llama are built can really be learned from the released weights, and nobody can take it and build something better even if they did have the computing resources to do so.


I guess to make this analogy work, Llama would have to be the equivalent of the “Intel CPUs or Nvidia GPUs” in this scenario.

You can build open source applications which use Llama or other freely available LLMs, but the LLMs that your open source application uses aren’t open, in the same way that the Intel CPU your open source application runs on isn’t open. So Llama itself is not FOSS, but FOSS apps can utilize Llama, right? :slight_smile:
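
To put that in code: below is a rough sketch of a FOSS app using Llama as an opaque backend, assuming a local Ollama server with a Llama model already pulled (the endpoint and model name are Ollama’s defaults, not anything specific to this argument). The application code can be entirely open even though the model it calls is not.

```python
# Rough sketch: an open-source app that treats a locally hosted Llama model
# as an opaque backend. Assumes a local Ollama server on its default port with
# a Llama model already pulled (e.g. `ollama pull llama3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_llama(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its completion text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # The app's own code is fully open; the model it calls is not,
    # just as it would run unchanged on a closed Intel CPU.
    print(ask_llama("Summarize the four freedoms of free software."))
```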

3 Likes

Yes, in this analogy LLMs (multi-modal or otherwise) are compute engines that run natural language instructions (see / mirror).