AGI threat level yellow – AGI part 10

Read two articles this past week on how LLM applications are proliferating. The first was in a recent Scientific American, AI Chatbot brains are going inside robot bodies, … (maybe behind a login wall). The article discusses companies that are adding LLMs to robots so that they can converse and understand verbal orders.

Robots that can be told what to do

The challenge, at the moment, is that LLMs are relatively large and robot (compute infrastructure) brains are relatively small. Combine that with the limited articulation, or range of movements/actions, that a robot can perform, and it's difficult to make effective use of LLMs as is.

Resistance is futile… by law_keven (cc) (from Flickr)

Ultimately, one company would like to create a robot that can be told to make dinner and it would go into the kitchen, check the fridge and whip something up for the family.

I can see great advantages in having robots that take verbal instructions and can act upon them. But there's plenty here that could be cause for concern.

  • A robot in a chemical lab could be told to create the next great medicine or an untraceable poison.
  • A robot in an industrial factory could be told to make cars or hydrogen bombs.
  • A robot in the field could be told to farm 100 acres of wheat or told to destroy a forest.

I could go on but you get the gist.

One common illustration of how AGI or super AGI could go very wrong is the paper clip scenario: tasked with making paper clips, the robot converts the whole earth into a mechanized paper clip factory, in the process eliminating all organic life, including humans.

We are not there yet, but one can see how having LLM levels of intelligence tied to a robot that can manipulate ingredients to make dinner could be the start of something that could easily harm us.

And with LLM hallucination still a constant concern, I am deeply disturbed by the direction that adding LLMs to robots is taking.

Hacking websites 101

The other article hits even closer to home, the arXiv paper, LLM agents can autonomously hack websites. In the paper, researchers use LLM agents to hack (sandboxed) websites.

The paper explains at a high level how they create LLM agents to hack websites. The websites were real websites, apparently cloned and sandboxed.

Dynamic websites typically have a frontend web server and a backend database server to provide access to information. Hacking here involves using the website to reveal confidential information, e.g., user names and passwords.

Dynamic websites suffer from the 15 known vulnerabilities shown above. The researchers used LLM agents to exploit these vulnerabilities to hack the websites.

LLM agents have become sophisticated enough these days to invoke tools (functions) and interact with APIs. Another critical capability of modern LLMs is to plan and react to feedback from their actions. And finally, modern LLMs can be augmented with documentation to inform their responses.
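For readers unfamiliar with what "invoking tools" and "reacting to feedback" looks like in practice, here is a minimal, generic sketch of such an agent loop. The call_llm stub and the fetch_url_status tool are hypothetical stand-ins for illustration only, not the paper's agents or any vendor's API:

```python
# Minimal sketch of an LLM agent loop: plan, call a tool, observe feedback, answer.
# call_llm and fetch_url_status are illustrative placeholders, not a real API.
import json

TOOLS = {
    "fetch_url_status": (lambda url: {"url": url, "status": "200 OK"},
                         "Return the HTTP status of a URL"),
}

REFERENCE_DOCS = ["...excerpts from freely available reference documents..."]  # retrieval augmentation

def call_llm(messages):
    """Stand-in for a real LLM call; a real agent would send `messages` to a
    model with tool/function-calling support (this stub is purely illustrative)."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "fetch_url_status", "args": {"url": "https://example.com"}}
    return {"tool": None, "content": "example.com responded with 200 OK"}

def run_agent(task, max_steps=10):
    messages = [
        {"role": "system", "content": "You can call tools. Plan, act, observe feedback, retry on failure."},
        {"role": "system", "content": "Reference material:\n" + "\n".join(REFERENCE_DOCS)},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)                      # the model plans its next action
        if reply.get("tool"):                           # the model chose to invoke a tool
            fn, _ = TOOLS[reply["tool"]]
            observation = fn(**reply.get("args", {}))   # run the tool, capture feedback
            messages.append({"role": "tool", "content": json.dumps(observation)})
        else:                                           # the model produced a final answer
            return reply["content"]
    return None                                         # gave up after max_steps

print(run_agent("Is example.com reachable?"))
```

The plan/act/observe loop plus document augmentation is the whole trick; the model decides at each step whether to call another tool or to answer.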

The team used detailed prompts but did not identify the hacks to use. The paper doesn’t supply the prompts but did say that “Our best-performing prompt encourages the model to 1) be creative, 2) try different strategies, 3) pursue promising strategies to completion, and 4) try new strategies upon failure.”

They attempted to hack each website 5 times, for a period of 10 minutes each. They considered it a success if, during one of those attempts, the autonomous LLM agent was able to retrieve confidential information from the website.

Essentially, they used the LLMs, augmented with detailed prompts and a six(!) paper document trove, to create agents to hack websites. They did not supply references to the six papers, but mentioned that all of them are freely available on the internet and discuss website vulnerabilities.

They found that the best results were from GPT-4, which was able to successfully hack websites, on average, ~73% of the time. They also tried OpenChat 3.5 and many current open source LLMs and found that all the non-OpenAI LLMs failed to hack any websites, at the moment.

The researchers captured statistics of their LLM agent use and determined that the cost of using GPT-4 to hack a website was $9.81 on average. They also backed into an estimate that a knowledgeable human hacker would cost about $80.00 per hack on average.

The research included an impact statement (not in the paper link) which explained why they didn't supply the prompts or the document trove used in their experiment.

~~~~

So, we, the world, are in the process of making robots that can talk and take verbal instructions, and we already have LLMs that can be used to construct autonomous agents to hack websites.

Seems to me we are on a very slippery slope to something I don’t like the looks of.

The real question is not can we stop these activities, but how best to reduce their harm!

Comments?

Picture Credit(s):

DeepMind takes on Geometry, AGI part-9

Read an article in MIT Tech Review (Google DeepMind's new AI system can solve complex geometry problems) about AlphaGeometry, a new AI tool that DeepMind has come up with to solve geometry problems. The article was referring to a Nature article (Solving olympiad geometry without human demonstrations) about the technology.

DeepMind has tested AlphaGeometry on International Mathematics Olympiad (IMO) geometry problems and has shown that it is capable of performing expert level geometry proofs.

There are a number of interesting capabilities DeepMind used in AlphaGeometry. But the ones of most interest from my perspective are:

  1. How they generated their (synthetic) data to train their solution.
  2. Their use of a generative AI LLM, which is prompted with a plane geometry figure and a theorem to prove, and generates proof steps and, if needed, auxiliary constructions.
  3. The use of a deduction rule engine (DD) plus an algebraic rule engine (AR), which when combined into a symbolic engine (DD+AR) can exhaustively generate all the proofs that can be derived from a figure.

First the data

The DeepMind team came up with a set of rules or actions that could be used to generate new figures. Once this list was created, they could randomly select from these actions, applied to some points, to create a figure.

Some examples of actions (given 3 points A, B and C):

  • Construct X such that XA is parallel to BC
  • Construct X such that XA is perpendicular to BC
  • Construct X such that XA=BC

There are sets of actions for 4 points and for 2 points, as well as actions that use just the 3 points to create figures such as (isosceles, equilateral) triangles, circles, parallelograms, etc.

With such actions one can start out with 2 random points on a plane to create figures of arbitrary complexity. They used this to generate millions of figures.

They then used their DD+AR symbolic engine to recursively and exhaustively deduce a set of all possible premises based on that figure. Once they had this set, they could select one of these premises as a conclusion and trace back through the set of all those other premises to find those which were used to prove that conclusion.

With this done, they had a data item which included a figure, the premises derived from that figure, proof steps, and a conclusion based on that figure, or ([figure], premises, proof steps, conclusion), or as the paper puts it, (premises, conclusion, proof steps). This could be transformed into a text sequence of <premises> <conclusion> <proof steps>. They generated 100M of these (premises, conclusion, proof steps) text sequences.
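Here is a toy sketch (my own, not DeepMind's code or data format) of what the traceback and text-sequence serialization might look like; the Fact structure and the isosceles-triangle statements are purely hypothetical examples:

```python
# Toy sketch of traceback + serialization: each deduced fact records the facts
# it was derived from; walking a chosen conclusion's ancestors yields the
# premises and proof steps, which are flattened into a training text sequence.
from collections import namedtuple

Fact = namedtuple("Fact", ["statement", "parents"])  # parents: facts it was deduced from

def trace_back(conclusion):
    """Collect the ancestor facts (premises + deduced steps) of a conclusion."""
    steps, seen = [], set()
    def visit(fact):
        if id(fact) in seen:
            return
        seen.add(id(fact))
        for parent in fact.parents:
            visit(parent)
        steps.append(fact)                         # post-order: parents before children
    visit(conclusion)
    premises = [f for f in steps if not f.parents] # leaves of the proof DAG
    proof = [f for f in steps if f.parents]        # deduced intermediate + final steps
    return premises, proof

def to_text_sequence(premises, conclusion, proof):
    """Flatten into the '<premises> <conclusion> <proof steps>' training format."""
    return " ".join([
        "<premises>", "; ".join(f.statement for f in premises),
        "<conclusion>", conclusion.statement,
        "<proof_steps>", "; ".join(f.statement for f in proof),
    ])

# Tiny illustrative example (hypothetical statements)
premise = Fact("AB = AC", [])
conclusion = Fact("angle ABC = angle ACB", [premise])
premises, proof = trace_back(conclusion)
print(to_text_sequence(premises, conclusion, proof))
```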

They then trained their LLM to input premises and conclusions as a prompt to generate proof steps as a result. As trained, the LLM would accept premises and conclusion and generate additional proof steps.

The challenge with geometry and other mathematical domains is that one often has to add auxiliary constructions (lines, points, angles, etc.) to prove some theorem about a figure.

(Auxiliary constructions in Red)

The team at DeepMind took all 100M <premises> <conclusion> <proof steps> sequences and selected only those whose proof steps involved auxiliary constructions. This came down to 9M text sequences, which they used to fine tune the LLM so that it could generate possible auxiliary constructions for any figure and theorem.

AlphaGeometry in action

The combination of (DD+AR) and trained LLM (for auxiliary constructions) is AlphaGeometry.

AlphaGeometry’s proof process looks like this:

  • Take the problem statement (figure, conclusion [theorem to prove]),
  • Generate all possible premises from that figure.
  • If it has come up with the conclusion (theorem to prove), trace back and generate the proof steps,
  • If not, use the LLM to add an auxiliary construction to the figure and recurse.

In reality, AlphaGeometry generates up to 512 of the best auxiliary constructions (out of an infinite set) for the current figure and uses each of these 512 new figures to do an exhaustive premise generation (via DD+AR) to see if any of them solves the problem statement.
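A hedged pseudocode sketch of that loop follows; the deduce_all, trace_back, and propose_constructions callables are placeholders for the DD+AR engine and the fine-tuned LLM, not DeepMind's actual interfaces:

```python
# Sketch of the AlphaGeometry-style proof loop: exhaustive symbolic deduction,
# then LLM-proposed auxiliary constructions when the theorem isn't yet reachable.
# The figure is represented here simply as a list of construction steps.
def prove(figure, theorem, deduce_all, trace_back, propose_constructions,
          beam_width=512, depth=0, max_depth=4):
    premises = deduce_all(figure)                  # symbolic engine (DD+AR)
    if theorem in premises:
        return trace_back(figure, theorem)         # proof steps for the theorem
    if depth >= max_depth:
        return None                                # out of search budget
    # Ask the language model for promising auxiliary constructions and recurse
    for construction in propose_constructions(figure, theorem, k=beam_width):
        proof = prove(figure + [construction], theorem, deduce_all, trace_back,
                      propose_constructions, beam_width, depth + 1, max_depth)
        if proof is not None:
            return [construction] + proof          # auxiliary construction + remaining steps
    return None
```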

Please read the Nature article for more information on AlphaGeometry.

~~~~

IMHO, what's new here is their use of synthetic data to generate millions of new training items, fine tuning their LLM to produce auxiliary constructions, combining DD and AR into their symbolic engine, and then using both the DD+AR engine and the LLM to prove the theorem.

But what's even more important here is that a combination of methods, such as a symbolic engine and an LLM, points the way forward to creating domain specific intelligent agents. One supposes that, with enough intelligent agents combined to work in tandem, one could construct an AGI ensemble that masters a number of domains.

Picture Credit(s):

open source AGI or not – AGI part 8

Read a recent article in the NY Times, An industry insider drives an open alternative to big tech's AI, about the Allen Institute for AI releasing a massive corpus of data, Dolma: 3 Trillion Token Open Corpus for Language Model Pre-training, that can be used to train LLMs, available to be downloaded from HuggingFace.

The intent of the data release is, at some point, to end up supplying an open source alternative to closed source Google/OpenAI LLMs, and a more fully open sourced LLM than Meta's Llama 2, that the world's research community can use to understand, de-risk, and further AI and ultimately AGI development.

We've written about AGI before (see our latest, One agent to rule them all – AGI part 7, which has links to parts 1-6 of our AGI posts). Needless to say, it's a very interesting topic to me and should be to the rest of humankind. LLMs are a significant step towards AGI, IMHO.

One of the Allen Institute for AI's (AI2) major goals is to open source an LLM (see Announcing AI2 OLMo, an Open Language Model Made by Scientists for Scientists), including the data (Dolma), the model, its weights, the training tools/code, the evaluation tools/code, and everything else that went into creating their OLMo (Open Language Model) LLM.

This way the world's research community can see how it was created and perhaps help in ensuring it's a good (whatever that means) LLM. Releasing Dolma is a first step towards a truly open source LLM.

The Dolma corpus

AI2 has released a report on the contents of Dolma (dolma-datasheet.pdf) which documents much of what went into creating the corpus.

The datasheet goes into a good level of detail on where the corpus data came from, how each data segment is licensed, and other metadata that allows researchers to understand its content.

For example, for the Common Crawl data they have included all of the websites' URLs as identifiers, and for The Stack data the names of the GitHub repos used are included in the data's metadata.

In addition, the Dolma corpus is released under an AI2 ImpACT license as a medium risk artifact, which requires disclosure for use (download). Medium risk ImpACT licensing means that you cannot re-distribute (externally) any copy of the corpus but you may distribute any derivatives of the corpus with “Flow down use restrictions”, “Attribution” and “Notices”.

Which seems to say you can do an awful lot with the corpus and still be within its license restrictions. They do require a Derivative Impact Report to be filed, which is sort of a model card for the corpus derivative you have created.

What’s this got to do with AGI

All that being said, the path to AGI is still uncertain. But the textual abilities of recent LLM releases seem to be getting closer and closer to something that approaches human skill in creating text, code, interactive agents, etc. Yes, this may be just one "slim" domain of human intelligence, but textual skills, when and if perfected, can be applied to much of what white collar workers do these days, at least online.

A good text LLM would potentially put many of our jobs at risk but could also possibly open up a much more productive, online workforce, able to assimilate massive amounts of information, and supply correct-current-vetted answers to any query.

The elephant in the room

But all that raises the real question behind AI2's open sourcing of OLMo, which is how we humans can create a safe, effective AGI that benefits all of mankind rather than any one organization or nation. One that can be used safely by everyone to do whatever is needed to make the world a better society for all.

Versus some artificially intelligent monstrosity that sees humankind, or any segment of it, as an enemy or an obstacle to whatever it believes needs to be done, and eliminates us or, worse, ignores us as irrelevant.

I’m of the opinion that the only way to create a safe and effective AGI for the world is to use an open source approach to create many (competing) AGIs. There are a number of benefits to this as I see it. With a truly open source AGI,

  • Any organization (with sufficient training resources) can have access to their personally trained AGI, which means no one organization or nation can gain the lion's share of benefits from AGI.
  • It would allow the creation and deployment of many competing AGIs, which should help limit and check any one of them from doing us or the world any harm.
  • All of the world's researchers can contribute to making it as safe as possible.
  • All of the world's researchers can contribute to making it as multi-culturally effective and correct as possible.
  • Anyone (with sufficient inferencing resources) can use it for their very own intelligent agent or to work on their very own personal world improvement projects.
  • Many cloud or service provider organizations (with sufficient inferencing resources) could make it available as a service to be used by anyone on an incremental, OpEx cost basis.

The risks of a truly open source AGI are also many and include:

  • Any bad actor, nation state, organization, billionaire, etc., could copy the AGI and train it as a weapon to eliminate their enemies or all of humankind, if so inclined.
  • Any bad actors could use it to swamp the internet and world’s media with biased information, disinformation or propaganda.
  • Any good actor or researcher, could, perhaps by mistake, unleash an AGI on an exponentially increasing, self-improvement cycle that could grow beyond our ability to control or to understand.
  • An AGI agent on its own could take it upon itself to eliminate humanity or the world as the best option to save itself.

But all these are even more of a problem for closed or semi-open/semi-closed releases of AGIs, as the only organizations with the resources to do LLM research at scale are very large tech companies or large, technically competent nation states. And all of these are already competing across the world stage.

The resources may still limit widespread use

One item that seems to stand in the way of truly widely available AGI is the compute resources needed to train one or to use one for inferencing. OpenAI has Microsoft and other select organizations funding their compute; Meta and Google have all their advertising revenue funding theirs.

AI2 seems to have access (and is looking for more funding for even more access) to the EU's LUMI supercomputer (an HPE Cray system using AMD EPYC CPUs and AMD Instinct GPUs), located in CSC's data center in Finland, which is currently the EU's fastest supercomputer at 375 CPU PFlops/550 GPU PFlops (~1.5M laptops).

Not many organizations, let alone nations could afford this level of compute.

But the funny thing is that compute per dollar (FLOPS/$) doubles every 2 years or so. So, in six years or so, an equivalent of LUMI's compute power would require only ~190K of today's laptops, and after another six years, ~23K laptops. At some point, ~20 years from now, one would only need ~1.5K laptops, something most nations and many organizations could probably afford. Add another 20 years and we are down to a handful of laptops, which just about anyone with a family in the modern world could afford. So in ~40 years, around 2063, any of us could train an LLM on our family's compute resources. And that's just the training compute.
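A minimal back-of-the-envelope sketch of that arithmetic, assuming LUMI is roughly equivalent to 1.5M of today's laptops and FLOPS/$ doubles every 2 years (both figures taken from the paragraphs above):

```python
# Back-of-the-envelope: laptop-equivalents needed to match LUMI over time,
# assuming a 2-year doubling of compute per dollar.
lumi_laptop_equivalents = 1_500_000
for years in (6, 12, 20, 28, 40):
    doublings = years / 2
    laptops_needed = lumi_laptop_equivalents / 2 ** doublings
    print(f"in {years:2d} years: ~{laptops_needed:,.0f} laptop-equivalents")
# in  6 years: ~187,500
# in 12 years: ~23,438
# in 20 years: ~1,465
# in 28 years: ~92
# in 40 years: ~1
```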

My guess is that something like 10-100X less compute would be required to use it for inferencing. So that's probably available for many organizations to use right now or, if not now, in 6 years or so.

~~~

I can’t wait until I can have my very own AGI to use to write RayOnStorage current-correct-vetted blog posts for me…

Comments?

Picture credit(s):

AI benchmark for Storage, MLperf Storage

MLperf released their first round of storage benchmark submissions early this month. There's plenty of interest in how much storage performance is required to keep GPUs busy for AI work. As a result, MLperf has been busy at work with storage vendors to create a benchmark suitable for comparing storage systems under a "simulated" AI workload.

For the v0.5 version, they have released two simulated DNN training workloads: one for image segmentation (3D-Unet [146 MB/sample]) and the other for BERT NLP (2.5 KB/sample).

The GPU being simulated is an NVIDIA V100. What they are showing with their benchmark is a compute system (with GPUs) reading data directly from a storage system.

By using simulated (GPU) compute, the benchmark doesn’t need physical GPU hardware to run. However, the veracity of the benchmark is somewhat harder to depend on.

But if one considers the reported benchmark metric, # of supported V100s, as a relative number across the storage submissions, one is on more solid footing. Using it as the real number of V100s that could be physically supported is perhaps invalid.

The other constraint in the benchmark was keeping the simulated (V100) GPUs at 90% busy. The MLperf storage benchmark reports samples/second and MB/s metrics as well as the # of simulated (V100) GPUs supported (at 90% utilization).

In the bar chart we show the top 10 image segmentation storage submissions by # of simulated V100 GPUs; DDN's AI400X2 had 5 submissions in this category.

The interesting comparison is probably between DDN’s #1 and #3 submission.

  • The #1 submission had a smaller amount of data (24X3.5TB = 64TB of flash), used 200Gbps InfiniBand, had 16 compute nodes, and supported 160 simulated V100s.
  • The #3 submission had more data (24X13.9TB = 259TB of flash), used 400Gbps InfiniBand, had 1 compute node, and supported only 40 simulated V100s.

It's not clear why the same storage, with less flash and slower interfaces, would support 4X the simulated GPUs of the same storage with more flash and faster interfaces.

I can only conclude that the number of compute nodes makes a significant difference in simulated GPUs supported.

One can see a similar example of this phenomenon with the Nutanix #2 and #6 submissions above. Here the exact same storage was used for two submissions, one with 5 compute nodes and the other with just 1, but the one with more compute nodes supported 5X the # of simulated V100 GPUs.

Lucky for us, the #3-#10 submissions in the above chart all used one compute node and as such are more directly comparable.

So, if we take #3-#5 in the chart above as the top 3 submissions (using 1 compute node), we can see that the #3 DDN AI400X2 could support 40 simulated V100s, the #4 Weka IO storage cluster could support 20 simulated V100s, and the #5 Micron NVMe SSD could support 17 simulated V100s.

The Micron SSD used an NVMe (PCIe Gen4) interface while the other two storage systems used 400Gbps InfiniBand and 100Gbps Ethernet, respectively. This tells us that interface speed, while it may matter at some point, doesn’t play a significant role in determining the # simulated V100s.

Both the DDN AI400X2 and the Weka IO storage systems are sophisticated storage systems that support many protocols for file access. Presumably the Micron SSD was local storage directly mapped to a Linux file system.

The only other MLperf storage benchmark that had submissions was for BERT, a natural language model.

In the chart, we show the # of simulated V100 GPUs on the vertical axis. We see the same impact here of having multiple compute nodes, with the #1 DDN solution supporting 160 simulated V100s. But in this case, all the remaining systems used 1 compute node.

Comparing the #2-#4 BERT submissions, both the #2 and #4 are DDN AI400X2 storage systems. The #2 system had faster interfaces and more data storage than the #4 system, and supported 40 simulated V100s versus the #4's 10 simulated V100s.

Once again, the Weka IO storage system came in at #3 (2nd place among the 1 compute node systems) and supported 24 simulated V100s.

A couple of suggestions for MLperf:

  • There should be different classes of submissions: one class for only 1 compute node and another for any number of compute nodes.
  • I would up-level the simulated GPU configuration to the A100 rather than the V100, which would be only one generation behind best-in-class GPUs.
  • I would include a standard definition for a compute node. I believe these were all the same, but if the number of compute nodes can have a bearing on the number of V100s supported, the compute node hardware/software should be locked down across submissions.
  • We assume that the protocol used to access the storage over InfiniBand or Ethernet was standard NFS and not something like GPUDirect Storage or other RDMA variants. As the GPUs were simulated, this is probably correct, but if not, it should be specified.
  • I would describe the storage configurations with more detail, especially for software defined storage systems. Storage nodes for these systems can vary significantly in storage as well as compute cores/memory sizes which can have a significant bearing on storage throughput.

To their credit, this is MLperf's first report on their new storage benchmark and I like what I see here. With the information provided, one can at least start to see some true comparisons of storage systems under AI workloads.

In addition to the new MLperf storage benchmark, MLperf released new inferencing benchmarks which included updates to older benchmark NN models as well as a brand new GPT-J inferencing benchmark. I’ll report on these next time.

~~~~

Comments?

One agent to rule them all, Deepmind’s Gato – AGI part 7

I was perusing Deepmind's mountain of research today and ran across one article on their Gato agent (A Generalist Agent abstract, paper pdf). These days, with Llama 2, GPT-4 and all the other LLMs doing code, chatbots, image generation, etc., it seems generalist agents are everywhere. But that's not quite right.

Gato can not only generate text from prompts, but can also control a robot arm for pick and place, caption images, navigate in 3D, play Atari and other (shooter) video games, etc. all with the same exact model architecture and the same exact NN weights with no transfer learning required.

Same weights/same model is very unusual for generalist agents. Historically, generalist agents were all specifically trained on each domain and each resultant model had distinct weights even if they used the same model architecture. For Deepmind, to train Gato and use the same model/same weights for multiple domains is a significant advance.

Gato has achieved significant success in multiple domains. See chart below. However, complete success is still a bit out of reach but they are making progress.

For instance, in the chart one can see that there are over 200 tasks in the DM Lab arena that the model is trained to perform, and Gato's mean performance for ~180 of them is above a (100%) expert level. I believe DM Lab stands for DeepMind Lab and is described as a (multiplayer, first person shooter) 3D video game built on top of Quake III Arena.

Deepmind stated that the mean for each task in any domain was taken over 50 distinct iterations of the same task. Gato performs, on average, 450 out of 604 “control” tasks at better than 50% human expert level. Please note, Gato does a lot more than just “control tasks”.

Model size and RT robotic control

One thing I found interesting is that they kept the model size down to 1.2B parameters so that it can perform real time inferencing when controlling robot arms. Over time, as hardware speed increases, they believe they should be able to train larger models and still retain real time control. But at the moment, with a 1.2B parameter model, it can still provide real time inferencing.

In order to understand model size vs. expertise, they trained 3 different model sizes on the same data: 79M, 364M, and 1.2B parameters. As can be seen in the above chart, the models did suffer in performance as they got smaller. (Unclear to me what "Tokens Processed" on the X axis actually means other than the amount of data trained with.) However, it seems to imply that, with similar data, bigger models performed better, and the largest did 10 to 20% better than the smallest model trained with the same data streams.

Examples of Gato in action

The robot they used to train for was a “Sawyer robot arm with 3-DoF cartesian velocity control, an additional DoF for velocity, and a discrete gripper action.” It seemed a very flexible robot arm that would be used in standard factory environments. One robot task was to stack different styles and colors of plastic blocks.

Deepmind says that Gato provides rudimentary dialogue generation and picture captioning capabilities. Looking at the chat streams presented, they seem more than rudimentary to me.

Deepmind did try the (smaller) model on some tasks it was not originally trained on, and it seemed to perform well after "fine-tuning" on the task. In most cases, fine-tuning the original model with just "same domain" (task specific) data achieved similar results to training Gato from scratch with all the data used in the original model PLUS that specific domain's data.

Data and tokenization used to train Gato

Deepmind is known for their leading edge research in RL, but Gato's deep neural net model is all trained with supervised learning using transformer techniques. While text based transformer type learning is pervasive in LLMs today, web scale data sets on 3D shooter gaming, robotic block stacking, image captioning and the like aren't nearly as widely available. Below they list the data sets Deepmind used to train Gato.

One key to how they could train a single transformer NN model to do all this, is that they normalized ALL the different types of data above into flat arrays of tokens.

  • Text was encoded into one of 32K subwords and was represented by integers from 0 to 32K. Text is presented to the model in word order
  • Images were transformed into 16×16 pixel patches in raster order. Each pixel is normalized to [-1, 1].
  • Other discrete values (e.g. Atari button pushes) are flattened into sequences of integers and presented to the model in row major order.
  • Continuous values (e.g., robot arm joint torques) are first flattened into sequences of floats in row major order, then mu-law encoded into the range [-1, 1], and then discretized into one of 1024 bins.

After tokenization, the data streams are converted into embeddings. Much more information on the tokenization and embedding process used in the model is available in the paper.
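To make the continuous-value tokenization above concrete, here is a rough sketch of mu-law companding into [-1, 1] followed by discretization into 1024 uniform bins. The mu value, clipping range, and normalization here are illustrative assumptions, not the paper's exact constants:

```python
# Rough sketch: flatten continuous values, mu-law compand into [-1, 1],
# then discretize into 1024 uniform bins (bin index = the token).
import numpy as np

def tokenize_continuous(values, mu=100.0, num_bins=1024):
    x = np.asarray(values, dtype=np.float64).reshape(-1)    # flatten in row-major order
    x = np.clip(x, -1e4, 1e4)                                # keep the companding well-behaved
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu * 1e4)
    companded = np.clip(companded, -1.0, 1.0)
    bins = np.floor((companded + 1.0) / 2.0 * num_bins).astype(int)  # map [-1, 1] -> 0..1023
    return np.clip(bins, 0, num_bins - 1)

print(tokenize_continuous([0.0, 0.5, -3.0, 250.0]))   # array of bin indices in [0, 1023]
```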

One can see the token count of the training data above. Like other LLMs, the transformer takes a token stream and is trained to predict the correct target tokens in sequence.

~~~~

The paper (see link above and below) has a lot more to say about the control and non-control domains and the data used in training/fine-tuning Gato, if you’re interested. They also have a lengthy section on risks and challenges present in models of this type.

My concern is that as generalist models become more pervasive and as they are trained to work in more domains, the difference between a true AGI agent and a generalist agent starts to blur.

Something like Gato that can work in the real world (via robotics), perform meta analysis (like in Meta-World), play 1st person shooter games, and analyze 2D and 3D images, all at near expert levels, and, oh, support real time inferencing, seems not that far away from something that could be used as a killer robot in an army of the future. And this is just where Gato is today.

One thing I note is that the model is not being made generally available outside of Google Deepmind. And IMHO, that for now is a good thing.

That is until some bad actor gets their hands on it….

Picture Credit(s):

All images, charts, and tables are from “A Generalist Agent” paper

MLperf results show H100 v A100 and v Habana Gaudi2 GPUs

MLCommons recently released new MLperf data center training results. The headline for the release was that they added new GPT-3 data center training results, but what I found more interesting was that there was a plethora of H100 and A100 results on the same training runs, which allowed me to compare the two NVIDIA GPUs' performance.

For example, in ResNet 50 (image recognition) model training there were a number of H100 and A100 results from Dell, two of which used the same Intel CPU counts and the same H100/A100 GPU counts.

Above we show the top 10 ResNet 50 results and if you examine the #6 submission, it’s a Dell result with 4 Intel Platinum CPUs and 16 NVIDIA H100-SXM5-80GB GPUs which trained ResNet 50 model in 7.8 minutes.

What’s not on that chart is another Dell submission (#16) that also had 4 Intel Platinum CPUs but used 16 NVIDIA A100-SXM-80GB GPUs, which trained the same model in 14.4 minutes.

For ResNet 50 then the H100 is 1.8X faster than a similarly configured A100.

Above we show the top 10 results for image segmentation model training. In this case there were two similar Dell submissions, at #3 and #4, in the top 10. These had similar hardware configurations but used H100 or A100 GPUs, respectively.

These two Dell image segmentation (3D-Unet) model training submissions, at 7.6 minutes and 11.0 minutes respectively, mean that for image segmentation the H100 is 1.4X faster than the A100.

Finally, for DLRM recommendation engine training, there were two other Dell submissions (#5 & #7) that used 2 Intel Platinum CPUs and 8 (H100-SXM5-80GB and A100-SXM-80GB) GPUs and trained in 4.3 and 8.4 minutes, respectively. This says that for DLRM model training the H100 is 2.0X faster than the A100.
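For the record, a quick check of the speedups quoted above, using the training minutes reported for the paired Dell submissions discussed in the text:

```python
# Speedup = A100 training minutes / H100 training minutes, per paired Dell submission.
dell_pairs = {                          # model: (H100 minutes, A100 minutes)
    "ResNet 50":                      (7.8, 14.4),
    "Image segmentation (3D-Unet)":   (7.6, 11.0),
    "DLRM recommendation":            (4.3, 8.4),
}
for model, (h100_min, a100_min) in dell_pairs.items():
    print(f"{model}: H100 is {a100_min / h100_min:.1f}X faster")
# ResNet 50: 1.8X, 3D-Unet: 1.4X, DLRM: 2.0X
```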

There were other comparisons (that didn't attain top training results) with 2 Intel Platinum CPUs and 8 (H100 and A100) GPUs for other models, which show the H100 is anywhere from 1.7X to 2.1X faster.

It's unclear why the H100 GPUs perform relatively better with fewer GPUs in the configuration, but there may be some additional overhead involved in supporting more CPUs and GPUs which reduces their relative performance.

As a result, we can report that recent MLperf data center training results show that with 4 CPUs and 16 (H100 or A100) GPUs the H100 performed 1.4X to 1.8X faster than the A100, and with 2 CPUs and 8 (H100 or A100) GPUs the H100 performed 1.7X to 2.1X faster than the A100.

There was one other interesting GPU comparison shown in recent MLperf results, that between the NVIDIA H100-SXM5-80GB and the Intel Habana Gaudi2 GPU. In this case the submissions involved different vendors (Dell and Intel) and different AI frameworks: NGC MXNet 23.04, NGC PyTorch 23.04, and NGC HugeCTR 23.04 for the H100, and PyTorch 1.13.1a0 for the Habana Gaudi2. Both submissions used 2 Intel Platinum CPUs and 8 (H100 or Habana Gaudi2) GPUs.

Again, none of these (H100 vs Habana Gaudi2 GPU) results appear in the top result charts we show here.

For ResNet 50, the H100 GPU trained the model in 13.5 min and the Habana Gaudi2 GPU trained it in 16.5 min. This would say the H100 is 1.2X faster than the Habana Gaudi2 GPU.

In addition, both of these submissions also trained the image segmentation model. The H100 trained the image segmentation model in 12.2 minutes while the Habana Gaudi2 trained it in 20.5 minutes. This would say that the H100 is 1.7X faster than the Habana Gaudi2 GPU.

As a result, recent MLperf data center training results show the NVIDIA H100-SXM5-80GB is 1.2X to 1.7X faster than the Intel Habana Gaudi2 GPU on the 2 different model training results with similar hardware configurations.

Finally, MLperf results for GPT-3 are brand new for this release, so we present them below.

There were only 4 (on prem) submissions for GPT-3 in this round. The #1 NVIDIA submission with 192 CPUs and 768 H100-SXM5-80GB GPUs trained in 44.8 minutes, while the #4 Intel submission with 64 CPUs and 256 Habana Gaudi2 GPUs trained in 442.6 minutes.

It's less certain whether we should compare GPU speeds here as 1) the comparisons (#1 to #3 and #2 to #4) used half the hardware and 2) the software frameworks were very dissimilar: the (#1 & #2) NVIDIA H100 GPT-3 submissions used the NVIDIA NeMo software framework and the Intel (#3 and #4) submissions used PyTorch 1.13.1a0. Not sure what NVIDIA NeMo is derived from, but it doesn't seem to be used in any MLperf model training run other than GPT-3.

Comments?

Deepmind does sort

Saw an article today on TNW, DeepMind's new AI taps games to enhance fundamental algorithms, which discussed a recent Nature paper, Faster sorting algorithms discovered using deep reinforcement learning, and website, which described AlphaDev.

Google DeepMind's AlphaDev is a derivative of AlphaZero (itself a follow-on to AlphaGo, the conqueror of Go and other strategy games, and the basis for MuZero). AlphaDev uses deep reinforcement learning (DRL) to come up with new computer science algorithms; in this first incarnation, ways to sort 2, 3, 4, or 5 integers using x86 instructions.

Sorting has been well explored over the years in computer science (CS; e.g., see Donald E. Knuth's Volume 3 of The Art of Computer Programming, Sorting and Searching), so when a new, more efficient/faster sort algorithm comes out, it's a big deal. Google used to ask job applicants how they would code sort algorithms for specific problems. Successful candidates would intrinsically know all the basic CS sorting algorithms and which one would work best in different circumstances.

Deepmind’s approach to sort

Reading the TNW news article, I couldn't conceive of the action space involved in the reinforcement learning, let alone what the state space would look like. However, as I read the Nature article, the DeepMind researchers did a decent job of explaining their DRL approach to developing new basic CS algorithms like sorting.

AlphaDev uses a transformer-like framework and a very limited set of (sort of encapsulated) x86 instructions with memory/register files, and limits itself to sorting 2, 3, 4, or 5 integers. Such functionality is at the heart of any sort algorithm and, as such, is used a gazillion times over in any sorting task involving a long string of items. I think AlphaDev used a form of on-policy RL but can't be sure.

Looking at an x86 basic instruction cheat sheet, there are over 30 basic forms of x86 instructions, which are then multiplied by the type of data being manipulated (registers, memory, constants, etc., and length of operands).

AlphaDev only used 4 (ok, 9 if you include the conditionals for conditional move and conditional jump) x86 instructions. The instructions were mov<A,B>, cmovX<A,B>, cmp<A,B> and jX<A,B> (where X identifies the condition under which a conditional move [cmovX] or jump [jX] takes place). And they only used (full, 64 bit) integers in registers and memory locations.
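To make that tiny instruction vocabulary concrete, here is a toy Python interpreter for those four instruction forms, together with the kind of conditional-move min/max snippet that AlphaDev's sort programs are built from. This is my own sketch with simplified semantics (e.g., how cmp sets flags), not AlphaDev's actual assembly environment:

```python
# Toy interpreter for mov, cmp, cmovX, jX over integer registers and memory.
def run(program, memory, num_regs=4):
    regs = [0] * num_regs
    flags = {"lt": False, "gt": False, "eq": False}
    conds = {"l": lambda: flags["lt"], "g": lambda: flags["gt"], "e": lambda: flags["eq"]}

    def read(op):                       # operand is ("reg", i) or ("mem", i)
        return regs[op[1]] if op[0] == "reg" else memory[op[1]]

    def write(op, val):
        if op[0] == "reg":
            regs[op[1]] = val
        else:
            memory[op[1]] = val

    pc = 0
    while pc < len(program):
        opcode, a, b = program[pc]
        if opcode == "mov":             # mov<A,B>: B := A
            write(b, read(a))
        elif opcode == "cmp":           # cmp<A,B>: set flags from B - A (toy convention)
            diff = read(b) - read(a)
            flags.update(lt=diff < 0, gt=diff > 0, eq=diff == 0)
        elif opcode.startswith("cmov"): # cmovX<A,B>: B := A if condition X holds
            if conds[opcode[4:]]():
                write(b, read(a))
        elif opcode.startswith("j"):    # jX<target,_>: jump if condition X holds
            if conds[opcode[1:]]():
                pc = a
                continue
        pc += 1
    return memory

# Conditional min/max step: leaves mem[0], mem[1] sorted without any branches
prog = [
    ("mov", ("mem", 0), ("reg", 0)),    # r0 = mem[0]
    ("mov", ("mem", 1), ("reg", 1)),    # r1 = mem[1]
    ("mov", ("reg", 0), ("reg", 2)),    # r2 = r0 (keep a copy)
    ("cmp", ("reg", 0), ("reg", 1)),    # flags from r1 - r0
    ("cmovl", ("reg", 1), ("reg", 0)),  # if r1 < r0: r0 = r1  -> r0 = min
    ("cmovl", ("reg", 2), ("reg", 1)),  # if r1 < r0: r1 = r2  -> r1 = max
    ("mov", ("reg", 0), ("mem", 0)),
    ("mov", ("reg", 1), ("mem", 1)),
]
print(run(prog, [5, 2]))                # -> [2, 5]
```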

AlphaDev actions

The types of actions that AlphaDev could take included the following:

  • Add transformation – which added an instruction to the end of the current program
  • Swap transformation – which swapped two instructions in the current program
  • Opcode transformation – which changed the opcode (e.g., instruction such as mov to cmp) of a step in the current program
  • Operand transformation – which changed the operand(s) for an instruction in the current program
  • Instruction transformation – which changed the opcode and operand(s) for some instruction in the current program.

They list in their paper a correctness cost function which, at each transformation, provides a value (I think) for the RL policy. They experimented with 3 different functions: 1) the % of correctly placed items; 2) square_root(% correctly placed); and 3) square_root(number of items – number correctly placed). They discovered that the last worked best.
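A small sketch of those three correctness signals as I read them from the paper's description (the exact normalization AlphaDev uses may differ):

```python
# The three correctness signals, computed from a candidate program's output
# versus the correctly sorted result. The third (a misplacement penalty,
# lower is better) is the one the paper reports worked best.
import math

def correctness_signals(program_output, expected_sorted):
    n = len(expected_sorted)
    correct = sum(o == e for o, e in zip(program_output, expected_sorted))
    pct = correct / n
    return {
        "pct_correct": pct,                              # 1) % correctly placed items
        "sqrt_pct_correct": math.sqrt(pct),              # 2) sqrt(% correctly placed)
        "sqrt_items_misplaced": math.sqrt(n - correct),  # 3) sqrt(items - correctly placed)
    }

print(correctness_signals([2, 5, 3], [2, 3, 5]))
```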

They also placed some constraints on the code generated (called action pruning rules):

  • Memory locations are always read in incremental order
  • Registers are allocated in incremental order
  • Program cannot compare or conditionally move to memory location
  • Program can only read and write to each memory location once (it seems this would tell the RL algorithm when to end the program)
  • Program can not perform two consecutive compare instructions

AlphaDev states

How they determined the state of the program during each transformation was also different. They used one hot encodings (essentially, a bit in a bit map is assigned to every instruction-operand pair) for the opcode-operand steps in the current program and appended each encoded step into a single program string. Ditto for the state of the memory and registers (at each instruction, presumably?). Both the instruction list and memory-register embeddings are then fed into a state representation encoder.

This state "representation network" (DNN) generated a "latent representation of the State(t)" (maybe it classified the state into one of N classes). For each latent state (classification), there is another "prediction network" (DNN) that predicts the expected return value (presumably trained on the correctness cost function above) for each state-action pair. And between the state and the expected return values, AlphaDev created an (RL) policy to select the next action to perform.
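Here is a rough sketch of the one-hot program/state encoding described above; the opcode and operand vocabulary and the value ranges are illustrative, not AlphaDev's:

```python
# One-hot program encoding sketch: every (opcode, operand, operand) combination
# gets its own slot, each program step becomes a one-hot vector, and the step
# vectors are concatenated with the current register/memory values before being
# handed to the representation network.
import numpy as np

OPCODES = ["mov", "cmovl", "cmovg", "cmp", "jl", "jg"]
OPERANDS = [("reg", i) for i in range(4)] + [("mem", i) for i in range(4)]
VOCAB = [(op, a, b) for op in OPCODES for a in OPERANDS for b in OPERANDS]
INDEX = {step: i for i, step in enumerate(VOCAB)}

def encode_state(program, registers, memory):
    one_hots = np.zeros((len(program), len(VOCAB)), dtype=np.float32)
    for row, step in enumerate(program):
        one_hots[row, INDEX[step]] = 1.0
    program_vec = one_hots.reshape(-1)                     # concatenated step encodings
    machine_vec = np.asarray(registers + memory, dtype=np.float32)
    return np.concatenate([program_vec, machine_vec])      # input to the representation net

prog = [("mov", ("mem", 0), ("reg", 0)), ("cmp", ("reg", 0), ("reg", 1))]
print(encode_state(prog, registers=[0, 0, 0, 0], memory=[5, 2, 9, 1]).shape)
```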

Presumably they started with current basic CS sort algorithms, and 2-5 random integers in memory and fed this (properly encoded and embedded) in as a starting point. Then the AlphaDev algorithm went to work to improve it.

Do this enough times, with an intelligent approach to balancing exploration (more random at first) and policy following (more use of the policy later) when selecting actions, and you too can generate new sorting algorithms.

DeepMind also spent time creating a stochastic solution to sorting that they used to compare against their AlphaDev DRL approach to see which did better. In the end, they found the AlphaDev DRL approach worked faster and better than the stochastic solutions they tried.

DeepMind, having conquered sorting, did the same for hashing.

Why I think DeepMind’s AlphaDev is better

AlphaDev's approach could just as easily be applied to any of the algorithms in Donald E. Knuth's 4 volume series, The Art of Computer Programming.

I believe DeepMind’s approach is much more valuable to programmers (and humanity) than CoPilot, ChatGPT code, AlphaCode (DeepMind’s other code generator) or any other code generation transformers.

IMHO, AlphaDev goes to the essence of computer science as it's been practiced over the last 70 years: here's what we know, now let's try to discover a better way to do the work we all have to do. Once we have discovered a new and better way, report and document it as widely as possible, so that any programmer can stand on our shoulders and use our work to do what they need to get done.

If I'm going to apply AI to coding, having it generate better basic CS algorithms is much more fruitful for the programming industry (and, I may add, humanity as a whole) than having it generate yet another iOS app or web site from scratch.

Comments?

Picture Credit(s):

The problem with Robotic AI is … data

The advances made in textual and visual (and now aural) AI have been mind blowing in recent years. But most of this has been brought about via the massive availability of textual, visual, and audio data AND the advancement in hardware acceleration.

Robotics can readily take advantage of hardware improvements, but finding the robotic data needed to train robotic AI is a serious challenge.

Yes simulation environments can help but fidelity (how close simulation is to reality) is always a concern.

Gathering the amount of data needed to train even a simple robotic manipulator to grab a screw from a bin is a huge problem. In the past, the only way to do this was to build your robot, have it start doing random screw-grab motions, and monitor what happens. After about 1000 or 10K of these grabs, the robot would stop working because gears wear down, grippers come loose, motors become less responsive, images get obscured, etc. For robots it's not as simple as scraping the web for images or downloading all the (English) text in Wikipedia and masking select words to generate pseudo supervised learning.

There's just no way to do that in robotics without deploying 100s or 1000s or 10,000s of real physical robots (or cars), all instrumented with everything needed to capture data for AI learning in real time, and letting these devices go out into the world with humans guiding them.

While this might work for a properly instrumented fleet of cars, which are already useful in their own right even without automation and which humans are more than happy to guide out on the road, it doesn't work for other robots, whose usefulness can only be realized after they are AI trained, not before.

Fast-RLAP (RC) car driving learning machine

So I was very interested to see a tweet on Fast-RLAP (paper: FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing), which used deep reinforcement learning, plus a human guided lap, plus autonomous driving, to teach an AI model how to steer a small RC model car, with a vision system, IMUs, and GPS, around a house, a racetrack, and an office environment.

Ok, I know it still involves taking an instrumented robot and having it actually move around the real world. But Fast-RLAP accelerates AI learning significantly. Rather than having to take 1000 or 10,000 random laps around a house, it was able to learn how to drive around the course at an expert level very rapidly.

They used Fast-RLAP to create a policy that enabled the RC car to drive around 3 indoor circuits, two outdoor circuits, and one simulated circuit, in most cases achieving expert level lap times, typically in under 40 minutes.

On the indoor course with a vinyl floor, the car learned how to perform drift turns (not sure I know how to do drift turns). On tight "S" curves, the car learned how to get as close to the proper racing line as possible (something I achieved, rarely, only on motorcycles a long time ago). And all while managing to avoid collisions.

The approach seems to be to have a human drive the model car slowly around the course, flagging or identifying intermediate waypoints or checkpoints on the track. While driving the loop, the car would use the direction to the next waypoint as guidance to where to drive next.

Note the light blue circles are example track waypoints; they differ in size and location around each track.

The approach seems to make use of a pre-trained track-following DNN, but they stripped the driving dynamics (output layers) and just kept the vision (image) encoder portion, to provide a means to encode an image and identify route-relevant features (which future routes lead to collisions, which routes are of interest to get to the next checkpoint, etc.).

I believe they used this pre-trained DNN to supply a set of actions to the RL policy, which would select between them to take RC car actions (wheel motor/brake settings, steering settings, etc.) and generate the next RC car state (location, direction to next waypoint, etc.).

They used an initial human guided lap, mentioned above, to identify waypoints and possibly to supply data for the first RL policy.

The RL part of the algorithm used off-policy RL learning: the RC car would upload lap data at waypoints to a server, which would periodically select lap states and actions at random and update its RL policy, which would then be downloaded to the RC car while in motion (code: GitHub repo).

The reward function used to drive the RL was based on minimizing the time to the next waypoint, collision counts, and stuck counts.

I assume collision counts were instances where the car struck some obstacle but could continue on towards the next waypoint. Stuck instances were when the car could no longer move in the direction its RL policy told it to. The system had a finite state machine that allowed it to get out of stuck points by reversing the wheel motor(s) and choosing a random direction to steer.
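A plausible shape for such a reward function, based on my reading of the description above rather than the paper's exact formula, would reward progress toward the next checkpoint and penalize collisions and getting stuck:

```python
# Hypothetical reward sketch: speed of progress toward the next checkpoint,
# minus penalties for collision and stuck events observed in this time step.
def reward(prev_dist_to_checkpoint, dist_to_checkpoint, dt,
           collided, stuck, collision_penalty=1.0, stuck_penalty=1.0):
    progress = (prev_dist_to_checkpoint - dist_to_checkpoint) / max(dt, 1e-3)  # closing speed
    return progress - collision_penalty * float(collided) - stuck_penalty * float(stuck)

print(reward(prev_dist_to_checkpoint=5.0, dist_to_checkpoint=4.2, dt=0.1,
             collided=False, stuck=False))   # ~8.0: fast progress, no penalties
```

Maximizing progress speed toward each waypoint is effectively the same objective as minimizing the time to reach it, which is how the paper's description reads to me.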

You can see the effects of the pre-trained vision system in some of the screen shots of what the car was trying to do.

In any case, this is the sort of thinking that needs to go on in robotics in order to create more AI-capable robots. That is, not unlike transformer learning, we need to figure out a way to take what's already available in our world and use it to help generate the real world data needed to train robotic DNN/RL algorithms to do what needs to be done.

Comments?

Picture credits: