19 December 2023

Letters from Silicon Valley: Winter 2023 – a Journey into the world of AI

Letters from ...

Trends and technologies from the tech valley

AI at any cost or proportionate regulation? Silicon Valley is divided on the subject, as shown not least by the conflict surrounding OpenAI's CEO Sam Altman in the past few weeks. By visiting Silicon Valley as a delegation, we are going to find out exactly what things look like on the ground.

Tim Ole Jöhnk, Director of the Northern Germany Innovation Office (NGIO), reports directly from the USA on topics that have been hotly debated in Silicon Valley over recent days and weeks. Read our article and travel with us!

Our hot topics in the Winter of 2023:

OpenAI and the question of how to handle AI in general

© unsplash

When it comes to drama, Silicon Valley could teach many Netflix shows a thing or two. The turbulence surrounding OpenAI, the artificial intelligence company which created ChatGPT, and its CEO Sam Altman has also swirled around local news portals for some time.

We're going to investigate just what triggered all this chaos and caused an existential schism in the field of AI, a split which even Silicon Valley has not yet been able to bridge, and maybe never will.

OpenAI was founded as a research organisation whose aim was to create a safe AGI (artificial general intelligence, i.e. an AI with broadly human-level capabilities) that would work to the benefit of humankind. The company itself has non-profit status, i.e. it cannot distribute commercial profits. However, soon after it began operating in 2015, it became apparent that developing AI is an expensive business and that donations alone wouldn't cover the cost of achieving the company's vision. For this reason, the company set up a profit-oriented subsidiary, which attracted Microsoft as a minority shareholder, bringing with it 13 billion dollars of investment. Despite this enormous sum, it was the Board of Directors of the non-profit part of OpenAI that retained the right to make decisions for the company as a whole.

However, the business's separation into profit-making and non-profit parts has provoked a number of conflicts over recent months (and maybe even years). OpenAI kept the research findings for its latest language model under wraps for a considerable length of time due to worries about how it could be used for negative purposes. Nevertheless, it was the meteoric rise of ChatGPT and its applications last year that increased the pressure on the profit-oriented part of OpenAI to create new and better models, with continuous optimisation, and keep ahead of the competition. It's a classic dilemma: the desire to generate profits and achieve more rapid commercialisation set against the pursuit of scientific knowledge and ethical concerns.

Which motivation is the stronger, and which will prevail? What are the aims of OpenAI anyway? These questions came to the fore during the recent battle for the top job at the company. Even though Sam Altman has returned to his former position as CEO of OpenAI, questions remain about the company's future direction and whether it can operate independently of its shareholders' desire to generate profits. Let's not forget that, during the period Altman was out of the company, Microsoft immediately offered new positions both to him and to any OpenAI staff who wanted to change jobs.

The dilemma facing OpenAI is not unique in the industry. For some time now, one issue under discussion has been the conflict between technocracy and techno-capitalism (acting with plenty of money and at great speed, entirely in line with the notion of "move fast and break things") on the one hand and a more sustainable, regulated approach on the other. Making progress for the sake of progress ("we do what we must, because we can") is countered by the desire for regulation and moderation, which is in turn coloured by the background fear that cautious companies might be overtaken if others push on with development regardless of these concerns. AI is developing at such a pace that it is almost impossible for anyone to keep up with it and foresee the potential negative consequences. Tech figures such as Elon Musk have repeatedly warned about the possible dangers of progress that happens too quickly.

This conflict comes at a time when long-standing calls for checks and regulatory frameworks ("AI governance") are being translated into actual measures, which highlights the extent to which this technology has impacted society as a whole. In November 2023, 18 countries signed an agreement on the implementation of uniform AI guidelines. Even though this agreement is not legally binding, the guidelines are an attempt to regulate the technology. The EU went even further in June by drawing up a legal framework, the AI Act, which is due to come into force by 2026. Society sees the need for regulation, and many in Silicon Valley agree.

Anthropic – the alternative to the alternative


A direct competitor to OpenAI is an example of this desire for regulation: the company Anthropic. It was founded by former OpenAI employees in 2021. Like OpenAI, it is built around a public-benefit mission, although with a different legal form (a public-benefit corporation rather than a non-profit). Its stated aim of creating safe, reliable and comprehensible AI systems is also similar. Going forward, the company will be overseen by a long-term benefit trust with a board of directors who have no financial interest in the company and whose sole aim is to ensure that AI systems act to the benefit of humankind in the long term.

More interesting than Anthropic's organisational structure, however, is the AI the company has created: "Claude". This AI uses a type of self-regulation instead of relying on human assumptions about what is good, safe and in the interest of humankind. At present, AIs are fine-tuned by human beings before they are released: thousands of AI responses to specific questions ("prompts") are viewed and evaluated to see whether they meet a model's guidelines. For example, the responses must not be discriminatory or infringe human rights. In contrast, "Claude" relies on constitutional principles for AI (formulated by human beings), which it uses to fine-tune its own responses independently. It therefore automates its own learning process, with the aim of excluding bias and the innate prejudices that every human being has, including, obviously, the people who evaluate and regulate AI models. Whether this will really work as intended or, like the three famous Laws of Robotics created by the SF author Isaac Asimov, is doomed to failure remains to be seen.
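The difference between human evaluation and constitutional self-correction can be sketched in a few lines of Python. Everything below is invented for illustration: the principles, the rule checks and the revision step are toy stand-ins, not Anthropic's actual constitution, training method or API.

```python
# Toy sketch of "constitutional" self-correction: the system critiques and
# revises its own output against written principles, with no human rater
# in the loop. The principles and fixes here are invented for illustration.

CONSTITUTION = [
    ("avoid insults", lambda text: "stupid" not in text),
    ("give a reason", lambda text: "because" in text),
]

def critique(response):
    """Return the list of constitutional principles the response violates."""
    return [name for name, check in CONSTITUTION if not check(response)]

def revise(response, violations):
    """Stand-in for the model rewriting its own answer; here we just patch it."""
    if "avoid insults" in violations:
        response = response.replace("stupid", "unhelpful")
    if "give a reason" in violations:
        response += " because it is safer"
    return response

def constitutional_loop(response, max_rounds=3):
    # Critique-and-revise until the response passes every principle.
    for _ in range(max_rounds):
        violations = critique(response)
        if not violations:
            break
        response = revise(response, violations)
    return response
```

In the human-feedback setting, the `critique` step would be thousands of human judgements; here the written constitution replaces them, which is the core of the idea described above.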

But even Anthropic cannot avoid being drawn into the fundamental dilemma. The company has received around two billion dollars from Google and four billion from Amazon. Like OpenAI, Anthropic also has methods of working that don't appear to be entirely in harmony with its own company motto. For example, music publishers have brought a legal action against the company for alleged infringements of the copyright on song texts.

Even here, the only thing anyone can say is: trust is good, but control is better. And necessary.

A delegation trip to discover AI in Silicon Valley in February 2024


So, what does the future hold for AI? How can this technology navigate its way between unregulated progress and legal and social frameworks? Is this the end of the "wild West"? These are topics which must be addressed right now, so we can prepare ourselves for the future.

These discussions will continue during our AI journey to Silicon Valley, from the 4th to the 9th of February 2024, led by Bremen's Senator for Economic Affairs, Ports and Transformation.

The visiting delegates will focus on AI applications for use in industrial production and manufacturing. On their six-day trip, participants will not only visit the major players in this sector in the USA but also have an opportunity to see the technology centres operated by German and European companies in Silicon Valley and meet up with interesting new start-ups. This will provide valuable insights into innovative ways of working.

In addition to visits to AI companies, official receptions and sector-specific meetings, the agenda will include discussions with experts about how the world of work might be transformed. We will keep you posted!

Open Source AI Frameworks – Getting Started with AI

© unsplash

The furious pace of development in AI in recent years has been made possible not least by a community that is open and happy to share knowledge. Many models and tools are open source and thus freely available, free of charge and, to a certain extent, free from copyright restrictions. This enables everyone to contribute to this new field of knowledge and to draw easily on a shared reservoir of information, driving the technology forward at pace.

There are datasets and pre-trained models which enable you to set up AIs quickly, without having to collect millions or even billions of data points yourself and then train a model on expensive leased cloud computing power.
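The point is easy to see in miniature. The sketch below bootstraps a trivial bag-of-words classifier from a small, already-labelled dataset; the data and the model are invented toys standing in for the millions of examples and the large pre-trained models that real open datasets and checkpoints provide.

```python
# Toy illustration of why shared open datasets matter: with labelled data
# already available, a working classifier takes a few lines instead of a
# data-collection campaign. Dataset and labels are invented for illustration.
from collections import Counter

OPEN_DATASET = [
    ("great product works perfectly", "positive"),
    ("really helpful and fast", "positive"),
    ("terrible quality broke quickly", "negative"),
    ("waste of money very slow", "negative"),
]

def train(dataset):
    """Count which words appear under each label (a bag-of-words model)."""
    model = {}
    for text, label in dataset:
        model.setdefault(label, Counter()).update(text.split())
    return model

def classify(text, model):
    """Pick the label whose training words best overlap the input."""
    return max(model, key=lambda label: sum(model[label][w] for w in text.split()))
```

In practice you would pull a real open dataset and a pre-trained checkpoint rather than training from scratch; the principle of reusing the community's labelled data instead of collecting your own is the same.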

Even major companies such as Google and Meta, with their BERT and Llama models respectively, are helping grow the community (while also benefiting from its preliminary work). Interestingly, ChatGPT and Claude are not freely available.

For those companies, open source is only a means of easing their own work. Yet open source also makes things more transparent and controllable, which are central issues when it comes to handling AI responsibly. That is another powerful argument for getting involved in this community.

There are a multitude of such tools available nowadays. We've drawn up a list of the most important platforms in 13 categories (the funding round and the approximate capital raised in US dollars are given in brackets):

  1. Generative AI LLM Developers/Platforms: TOGETHER (Seed, ~$20M), Hugging Face (Series D, ~$395M)
  2. Machine Learning Training Data Curation: Snorkel (Series C, ~$135M)
  3. Synthetic Training Data – Media: ZumoLabs (Seed, ~$150K)
  4. Synthetic Training Data: TONIC (Series B, ~$45M), Gretel (Series B, ~$68M)
  5. Vector Databases: Chroma (Seed, ~$20M), Zilliz (Series B, ~$113M)
  6. Feature Stores & Management: KASKADA (Series A, ~$10M), FeatureBase (Series A, ~$24M)
  7. Federated Learning Platforms: OWKIN (Series B, ~$305M)
  8. LLM Application Management: Rasa (Series B, ~$40M)
  9. Algorithmic Auditing & Risk Management: Credo AI (Series A, ~$18M)
  10. Model Development & Serving: OctoML (Series C, ~$132M)
  11. Model Validation & Monitoring: Fiddler (Seed, ~$45M), Whylabs (Series A, ~$14M)
  12. Hardware-aware AI Optimization: Run:ai (Series C, ~$118M)
  13. AI Development Platforms: MindsDB (Seed, ~$55M), BentoML (Seed, ~$9M)
