Boulder Future Salon News Bits

Thumbnail
Deep Learning State of the Art 2020 from Lex Fridman. 2019 was the first time it became cool to highlight the limits of deep learning. Deep learning is not able to do the broad spectrum of tasks that we think of as artificial intelligence, like common sense reasoning, building knowledge bases, and so on.

2019 saw tremendous growth in conference papers, and there's a lot of exciting research.

The TensorFlow and PyTorch frameworks have really matured, and they've converged toward each other, each taking the other's best features. TensorFlow gained eager execution, and PyTorch gained TensorFlow-style graph execution with TorchScript. TensorFlow has deep Keras integration and edge and cloud support with TensorFlow Lite and TensorFlow Serving, with PyTorch catching up via PyTorch Mobile and TPU support.
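
To make the convergence concrete, here's a minimal sketch (mine, not from the talk) of the same tiny model running eagerly in PyTorch and then compiled to a static graph with TorchScript:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
x = torch.randn(1, 4)

eager_out = model(x)                # eager execution: ops run immediately
scripted = torch.jit.script(model)  # TorchScript: compile to a static graph
graph_out = scripted(x)             # same result, now serializable for deployment

print(torch.allclose(eager_out, graph_out))  # True
```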

In reinforcement learning, there are no clear winners; there are frameworks from OpenAI, Google, and DeepMind.

In language processing, the power of the transformer architecture was demonstrated by BERT. BERT achieved state-of-the-art performance on contextualized word embeddings, sentence classification and sentence pair classification, sentence pair similarity, multiple choice, sentence tagging, and question answering. People used "BERT" in the names of their own transformer systems: RoBERTa, DistilBERT, ALBERT. CTRL, Megatron, and OpenAI's GPT-2 are other transformer language models. Megatron from Nvidia improved on the GPT-2 transformer model, increasing the number of parameters to 8.3 billion (24x the size of GPT-2). XLNet from CMU and Google combines the bidirectionality of BERT with a recurrence mechanism, and currently has state-of-the-art performance on 18 NLP tasks including question answering, natural language inference, sentiment analysis, and document ranking. ALBERT from Google Research and Toyota achieves state-of-the-art performance on 12 NLP tasks but does so with many fewer parameters thanks to a parameter-sharing scheme.
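
As an illustration of what these pretrained models do (my example, not from the talk), here's a hedged sketch querying BERT's masked-language-model head through the Hugging Face transformers library:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pretrained with masked language modeling, so it can rank
# candidate words for the [MASK] position from the surrounding context.
for candidate in fill_mask("The goal of AI is to build machines that can [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```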

He (Lex Fridman) has some fun with GPT-2, getting it to generate funny text and testing its reasoning ability (it doesn't have any).

MultiWOZ is the state-of-the-art benchmark for "dialogue", but deep learning has a long way to go for open conversations. No bot can maintain a conversation for 20 minutes.

The code2seq system models computer code, generating natural-language sequences such as method names from paths through a program's syntax tree.

2019 was an exciting year for reinforcement learning. OpenAI's OpenAI Five was finally able to beat top Dota 2 players. DeepMind took on multi-agent learning with Quake III Arena Capture the Flag, and took on grandmasters in StarCraft 2. On the imperfect-information side, Pluribus took on six-player no-limit Texas Hold'em poker and beat world-class professional players.

In robotics, OpenAI trained a robotic hand to manipulate a Rubik's Cube.

The lottery ticket hypothesis is that there are small subnetworks within neural networks that do all the work. By pruning away the rest of the network, you can get a small network that performs just as well.
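
Here's a minimal sketch of one-shot magnitude pruning using PyTorch's built-in pruning utilities. Note this is just the pruning step; the actual lottery ticket procedure also rewinds the surviving weights to their initial values and retrains:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 100)

# Zero out the 90% of weights with the smallest absolute values.
prune.l1_unstructured(layer, name="weight", amount=0.9)

remaining = int(layer.weight.ne(0).sum())
print(f"{remaining} of {layer.weight.numel()} weights survive")  # ~1000 of 10000
```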

Neural networks have learned disentangled representations, which means they can "disentangle", for example, floors and objects in an image.

Another new idea is "double descent", where increasing the number of neural network parameters actually makes the error worse before the error rate undergoes a "second descent".

Self-driving cars are exciting because they're where regular humans experience AI. There are two approaches: Level 4 with Waymo and Level 2 with Tesla. Waymo can drive cars on regular streets with no test driver, but Tesla has a much larger fleet deployed, more miles, and more data. Tesla is working on "active learning", where there are no separate training and inference stages; the neural network learns continuously and picks up edge cases as soon as they come up. Waymo uses lidar while Tesla uses vision. Only Tesla does over-the-air updates.

Open question: How hard is driving? How many edge cases does driving have? How much can be learned from simulation?

AI discussed in politics: Andrew Yang made it a campaign issue. The President signed the American AI Initiative. Tech leaders were brought before government. Recommendation systems based on deep learning came under scrutiny.

Online deep learning courses: fast.ai (Jeremy Howard & Rachel Thomas), Stanford CS231n and CS224n, deeplearning.ai (Andrew Ng), Introduction to Reinforcement Learning (David Silver), and Spinning Up in Deep RL (OpenAI). Over 200 machine learning, NLP, and Python tutorials.

Deep learning books: Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville; Grokking Deep Learning by Andrew Trask; and Deep Learning with Python by François Chollet (new version coming out soon).

Q&A by MIT students.

Thumbnail
Businesses based on drones. In real estate, drones can show the scale of a property and nearby features such as lakes. In roofing, drones can act as roofing inspectors, and with thermal cameras, can show where insulation is poor. In security, drones can replace humans walking around a property over and over, and thermal cameras can spot people and animals that shouldn't be around. Drones can track plastic pollution on beaches. Drones can be used in archaeology, flooding prevention, urban planning, and agriculture. Agriculture especially, since drones can look at a lot of land from a bird's-eye view. Drones can detect different types of soil, detect pests and fungal infestations in crops, detect and count livestock, show where land is and isn't shaded for installation of solar panels, and, with thermal cameras, spot solar panels that need fixing. Drones can be used to inspect wind turbines for corrosion, and to inspect mobile phone towers and power lines. Drones can clean and de-ice the blades of wind turbines. Drones can spray fertilizer and pesticides on crops. Drones can blast pods with tree seeds into fields and reforest tough terrain faster than humans ever could. Drones can be used for recreational fishing, using sonar to locate fish. Drones can be used for lifting materials in the construction industry, building cable-net and suspension structures, and spraying cement-like mixtures. Drones can be used for construction in places where it's hard to get heavy machinery. Drones can be used for package delivery, including in rural areas without delivery infrastructure. High-flying drones can be used to provide internet access.

Thumbnail
"Walmart expands its robotic workforce to 650 additional stores." "The new robots, designed by San Francisco-based Bossa Nova Robotics Inc., join the ranks of Walmart's increasingly automated workforce which also includes devices to scrub floors, unload trucks and gather online-grocery orders. They're part of Chief Executive Officer Doug McMillon's push to reduce costs, improve store performance and gain credibility as a technology innovator as it battles Amazon.com Inc."

Thumbnail
The Charmin robot will deliver you toilet paper when you've run out. Alrighty then.

Thumbnail
Pudu, the robot that helps keep you fed. Hobot, the window-cleaning robot. Varram, the pet training robot. The Unitree Robotics humanoid talking head that tells jokes. Canbot will let you take a picture. Liku, the Japanese walking, talking robot doll. And Omron's ping-pong playing robot is back. All at CES 2020.

Thumbnail
Ballie is a robot ball that entertains your pets while you're at work, and apparently tells your Roomba to clean up when they spill food on the floor. From CES 2020.

Thumbnail
Lovot "the robot pet" at CES 2020. It's cute and warm, adapts to its behavior to you, and has an app where you can... (wait for it...) change its eye color. And it wanders around your house and the app can tell you where it is.

Thumbnail
"The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks." The article starts off going through some of the history, and tells it in a way that does a good job of underscoring a point I constantly make when people make comparisons between AI and the brain to make predictions about AI: we are constantly discovering that brains are more complicated than we previously thought. For example, we once thought there were only a few neurotransmitters, later we found out there's hundreds. We once thought there was one type of brain cell in a layer, later on we found out there's layers within the layers. We once thought the computation was done in the main body of the neuron, now we know the branches feeding in (called dendrites) also perform computation. How much do you want to bet there are many surprises to come?

"'I believe that we're just scratching the surface of what these neurons are really doing,' said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month."

Human brains have a much thicker cortex than the brains of other animals. The cortex has multiple layers, and layers 2 and 3 are disproportionately thick. A type of neuron in these layers, the pyramidal neuron, is more "excitable", meaning it fires more easily, than the corresponding neurons in other animals such as mice. For layer 5 it is the reverse, with the mouse neurons being more "excitable."

When looking at these neurons, there are three kinds of "action potentials" that can be seen. An "action potential" is when a cascading ion flow results in an electrical signal getting sent along the surface of the membrane. One of them comes from the cell body itself and is called the "somatic" action potential. Another is called the backpropagating action potential, and it goes back up the dendrite. Remember, for a neuron, we think of the dendrites as acting as the "inputs" for the cell, and the axon as acting as the "output". The "output" of one cell connects to the "input" of the next cell at the synapse. Ok, so the third kind of action potential is the dendritic calcium action potential that is the focus of this study. The fact that it uses calcium is significant, as noted in the article, because normally action potentials use sodium and potassium. As an aside, these researchers are braver than I am, because they established that these channels use calcium and not sodium & potassium by testing sodium and calcium channel blockers on them. The sodium channel blocker is called tetrodotoxin and is indeed extremely toxic, about 25 times more toxic than cyanide. No thanks, I'm not going anywhere near that stuff.

Anyway, in mice, these calcium action potentials are seen in layer 5, but in humans, they're in layers 2 and 3. The backpropagating action potential is seen after the main cell's action potential fires, but the dendritic calcium action potential happens before the main cell fires. The amplitude of backpropagating action potentials drops with distance from the main cell, but the amplitude of dendritic calcium action potentials does not change with the distance up the dendrite from the main cell. Complex patterns in the dendritic calcium action potentials can cause the main cell to fire in a delayed manner.

The experimenters got the dendritic calcium action potentials to fire by "injecting" an electrical current into the dendrite. The response of the dendritic calcium action potential to the current is not what you would expect. If the current is too low, nothing happens. As soon as the current crosses a key threshold (called the rheobase, from Greek "rhe" meaning current, and "base" meaning, well, base), the dendritic calcium action potential fires, very strong and fast. As you further increase the current from there, the intensity of the dendritic calcium action potential actually declines -- rather than becoming more intense, it becomes more elongated instead. A current beyond about 1.5x the rheobase results in nothing happening at all.

This is completely different from the sodium-potassium action potentials of the main cell. With those, as you cross the key threshold (rheobase), the cell fires, but as you continue increasing the intensity of the current stimulating the cell, the response of the cell remains exactly the same -- it fires with exactly the same intensity and duration.

This is where the comparison with AND and XOR gates in computing comes from. If you set your inputs to the right current levels, the main cell will act as an "AND" gate, firing when two (or more) inputs cross some threshold. The dendritic calcium action potential, however, will fire when one of the inputs crosses some threshold, but will not fire if two inputs cross the same threshold. By suppressing the firing of the action potential when the total input current is above some threshold, you get an XOR logic gate. An "exclusive OR", or XOR, logic gate is a gate where the output is "on" when either of its inputs is "on" but not both. The "not both" is what differentiates an XOR from a regular OR.
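
To make the gate analogy concrete, here's a toy numerical sketch (mine, not the paper's model) where a simple thresholded "somatic" unit behaves like AND, while a unit that only fires inside a band of input current behaves like XOR:

```python
import numpy as np

def somatic(i1, i2, threshold=2.0):
    # Fires whenever the total input crosses the threshold: AND-like.
    return (i1 + i2) >= threshold

def dendritic(i1, i2, rheobase=1.0):
    # Fires above the rheobase but is suppressed beyond ~1.5x it: XOR-like.
    total = i1 + i2
    return (total >= rheobase) & (total < 1.5 * rheobase)

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, "AND:", somatic(a, b), "XOR:", dendritic(a, b))
```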

The way we currently design neural networks, since it is known that, mathematically, any chain of linear operations can be combined into a single linear operation that does the same thing, a non-linear operation has to be inserted in order to get multiple layers to actually do anything useful. As it turns out, something as simple as a line with a kink in it at 0 (this is called a "rectified linear unit" or ReLU) is sufficient to make neural networks work.
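
Here's a quick numerical check of that claim (my example): two linear layers with nothing between them collapse into one linear layer, while a ReLU in between breaks the collapse:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
x = rng.normal(size=4)

relu = lambda v: np.maximum(v, 0.0)   # the "line with a kink at 0"

linear_chain = W2 @ (W1 @ x)
collapsed    = (W2 @ W1) @ x          # single equivalent linear map
nonlinear    = W2 @ relu(W1 @ x)

print(np.allclose(linear_chain, collapsed))  # True: the chain collapses
print(np.allclose(nonlinear, collapsed))     # False: ReLU breaks the collapse
```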

It appears that, at least in cortical pyramidal neurons in layers 2 and 3, the neurons themselves combine a non-linear XOR operation in the dendritic calcium action potentials with a linear operation in the main cell's action potentials.

Thumbnail
AutoGluon is an AutoML toolkit for deep learning. It was originally designed for MXNet (Amazon's machine learning framework) but now also works with PyTorch. It works only on Linux with Python 3.6+. It has tutorials for many basic tasks such as image recognition, but I don't see an explanation of how the system works, and I don't know how well it performs. Generally, AutoML systems, meaning systems that do automatic neural network architecture selection and hyperparameter tuning, are incredibly resource-intensive.
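
For flavor, here's roughly what using it looks like, based on AutoGluon's documented tabular API. The entry points have changed between versions, and the file names and "label" column here are hypothetical, so treat this as illustrative:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("train.csv")  # hypothetical CSV with a "label" column

# fit() searches over models and hyperparameters automatically -- this is
# the resource-intensive AutoML step; time_limit caps it in seconds.
predictor = TabularPredictor(label="label").fit(train, time_limit=600)

test = TabularDataset("test.csv")    # hypothetical held-out data
print(predictor.evaluate(test))
```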

Thumbnail
"A research team at Google has developed a deep neural network that can make fast, detailed rainfall forecasts. The researchers say their results are a dramatic improvement over previous techniques in two key ways. One is speed. Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a weather forecast an hour in the future. By contrast, Google says its system can produce results in less than 10 minutes -- including the time to collect data from sensors around the United States."

"If you're thinking about going for a bike ride, for example, you'd be able to look up a minute-by-minute rainfall forecast for your specific route."

The system is based on data from the multi-radar multi-sensor system developed by NOAA's National Severe Storms Laboratory, which breaks down the US into 1x1 km squares, unlike conventional weather systems that use squares of about 5x5 km. This is fed into a neural network architecture called a U-Net, which downsamples the input and then upsamples the output going the other way. It doesn't do this for the whole country; it divides the country into 256x256 km blocks. It does not use any physics -- the system is entirely neural networks that learn from training data. The system only predicts 6 hours into the future, and it predicts only one thing: rainfall, categorized into 4 brackets: 0-0.1 mm/hour, 0.1-1.0 mm/hour, 1.0-2.5 mm/hour, and 2.5+ mm/hour.
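
For a sense of the architecture, here's a toy U-Net-style sketch in PyTorch (my own, vastly smaller than Google's actual model): downsample, upsample, merge through a skip connection, and emit one logit per rainfall bracket for every grid cell:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=4):
        super().__init__()
        self.enc = nn.Conv2d(in_channels, 16, 3, padding=1)
        self.pool = nn.MaxPool2d(2)                        # halve resolution
        self.mid = nn.Conv2d(16, 32, 3, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # restore resolution
        self.head = nn.Conv2d(32, n_classes, 1)            # logits per rainfall bracket

    def forward(self, x):
        skip = torch.relu(self.enc(x))         # features at full resolution
        mid = torch.relu(self.mid(self.pool(skip)))
        up = self.up(mid)
        merged = torch.cat([up, skip], dim=1)  # the U-Net skip connection
        return self.head(merged)

# One 256x256 tile (the real tiles are 256x256 km at 1 km resolution).
logits = TinyUNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```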

Thumbnail
Boeing culture problems revealed by internal messages, and how "a company once driven by engineers became driven by finance". Ok, as some of you already know, how I got into looking at this is that Matt Parker said in one of his talks on mathematical mistakes that, when it comes to mistakes, the aviation industry has a "blame the process" culture, while other industries, including the medical industry (he was referring to the UK), have a "blame the person" culture -- meaning they find someone to blame and fire them, or even have them charged with crimes, rather than examine their process and find out why the process didn't catch the mistake. In the aerospace industry, they have checklist after checklist to make sure that if one person makes a mistake, someone else catches it, and so on.

This was while the Boeing 737 MAX accidents were in the news, so I decided to use that as a way of having a look into the aviation industry and seeing how this "blame the process" culture works. I ended up watching 8 hours of Congressional testimony and reading the book Field Guide to 'Human Error'. I've already commented on the Boeing Congressional testimony, and I'll comment at length on the Field Guide to 'Human Error' book at some point in the future, but for now, I'll just say that the book dispelled my notion that the aviation industry truly has a "blame the process" culture. Almost all the examples in the book come from the aviation industry. At best it could be said that the aviation industry has less of a "blame the person" culture than the rest of society, but it is by no means immune to this human tendency, which I think is probably pretty close to universal.

So recently, Boeing released some internal messages to the FAA, and while I haven't been able to find the messages anywhere online, according to the news reports, they are pretty damning. Furthermore, I found out about an Atlantic article written a few months ago about Boeing's culture. It claims that Boeing in the late 90s experienced a "reverse takeover" by McDonnell Douglas -- what they mean by "reverse takeover" is that Boeing acquired McDonnell Douglas, but after the merger, it was the McDonnell Douglas executives who ended up on top. And, according to the article, this was very consequential for Boeing, because Boeing went from an "engineering" culture to a "finance" culture -- McDonnell Douglas was a very profit-driven culture, while Boeing had a very "tell the engineers not to worry about finances" culture. After the merger, Boeing adopted a cost-cutting, hurry-up approach to building aircraft. The shift was exemplified by the creation of a new corporate headquarters in Chicago, a financial center, far away from Renton (in the Seattle area) where the airplanes are actually built.

If we assume this analysis is correct, well, Boeing just ousted its CEO, a guy with an engineering background, and put in a finance guy. After Phil Condit and Harry Stonecipher (the latter a McDonnell Douglas executive), somehow a Boeing engineer, Dennis Muilenburg, ended up in charge. Those of you who read my comments on the Congressional testimony will recall I noted that Congresspeople repeatedly called for his resignation, but he said he was determined to fix the problems with Boeing's culture that led to the accidents and "see it through". And I was asking the question: is it optimal to fire the person ("blame the person" culture), or keep the person who made the mistakes in place and help them (and others) learn from their mistakes and do better ("blame the process" culture)?

Well, if the root of the problem with Boeing's culture was being too finance-driven and too little engineering-driven, then replacing an engineering-oriented CEO with a finance-driven CEO is likely to make things worse, no? Let's compare the background of the outgoing CEO, Dennis Muilenburg, with that of the new CEO, Dave Calhoun, before the two of them became CEO:

Dennis Muilenburg:

- Bachelor's degree in Aerospace Engineering from Iowa State University
- Master's degree in Aeronautics and Astronautics from the University of Washington
- Engineering and engineering management on the Boeing X-32, Boeing's part in the Lockheed Martin F-22 Raptor contract, the YAL-1 747 Airborne Laser, the High Speed Civil Transport, and the Condor unmanned reconnaissance aircraft
- Vice president of Boeing Combat Systems division
- President of Boeing Integrated Defense Systems
- Chairman of the board

Dave Calhoun:

- Bachelor's degree in accounting from Virginia Tech
- Manager at GE, overseeing transportation, aircraft engines, reinsurance, and lighting
- Vice chairman of GE board
- CEO of VNU (a private-equity-owned company rebranded as Nielsen Holdings)
- Joined private equity firm Blackstone Group
- Joined board of directors for Caterpillar, Gates Corporation, and Medtronic
- Became director of the board at Boeing

As you can see, the new CEO has zero experience with engineering -- no school training in engineering and no on-the-job training in engineering, either. The closest he gets is management roles at GE where he was a manager of engineers. But he's never been an engineer himself.

Having said that, the CEOs whom the Atlantic article blames for transforming Boeing from an engineering-driven culture to a finance-driven culture, Phil Condit and Harry Stonecipher, did have engineering in their backgrounds. So having engineering degrees and job experience may not stop a person from becoming finance-driven, or the Atlantic article may be oversimplifying Boeing's actual culture and history.

So maybe you shouldn't sell your Boeing stock? I'm not predicting another accident (or series of accidents) is in Boeing's future, because I have no ability to predict that. It just seems to me that if the Atlantic's "engineering culture" vs "finance culture" analysis is correct, the odds of future finance-driven engineering failures have just gone up.

Anyway, one other thing. I've got a video here where an aviation industry commentator, asked about the recent Boeing messages, said they could be used as evidence that Boeing employees lied to the FAA, and if that's true, that's lying to a Federal agent, which is a felony. Furthermore, if someone commits a felony that results in other people dying, that can be prosecuted as murder. He doesn't think it's likely, but it looks like there's at least a theoretical possibility that Boeing employees could be charged with a felony for lying to Federal agents, and subsequently charged with murder based on that.

Thumbnail
Flower-like patterns made by combining microbes. (E. coli + A. baylyi.)

Thumbnail
An evolutionary algorithm designs a 'robot' out of blocks that can either contract or not, in order to achieve some objective function, such as simple linear movement, collective behavior, object manipulation, or object transport. Then, the 'robot' is built in the real world out of actual cells taken from the African clawed frog Xenopus laevis. The biological 'robot' performs the same movements. They call them "xenobots" after Xenopus laevis.
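
The evolutionary part can be sketched in a few lines. This is my simplification, not the paper's pipeline, and the fitness function is a hypothetical placeholder; the real work scores candidate designs in a physics simulator before anything is built from frog cells:

```python
import random

GRID = 5 * 5  # flattened 5x5 body plan: 1 = contractile block, 0 = passive

def fitness(genome):
    # Hypothetical stand-in: reward contractile tissue toward the "rear"
    # half of the body, a crude proxy for forward locomotion.
    return sum(g * i for i, g in enumerate(genome))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                   # keep the best half
    population = survivors + [mutate(p) for p in survivors]

print("best design:", max(population, key=fitness))
```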

Thumbnail
"In all magnets, every atom is associated with a tiny magnetic moment -- also known as 'spin'. In conventional magnets, like the ones that stick to fridges, all the spins are ordered so that they point in the same direction, resulting in a strong magnetic field. This order is like the way atoms order in a solid material."

"But just as matter can exist in different phases -- solid, liquid and gas -- so too can magnetic substances. The Theory of Quantum Matter (TQM) Unit unit is interested in more unusual magnetic phases called 'spin liquids', which could have uses in quantum computation. In spin liquids, there are competing, or 'frustrated' interactions between the spins, so instead of ordering, the spins continuously fluctuate in direction -- similar to the disorder seen in liquid phases of matter."

"Previously, the TQM unit set out to establish which different types of spin liquid could exist in frustrated pyrochlore magnets. They constructed a phase diagram, which showed how different phases could occur when the spins interacted in different ways as the temperature changed, with their findings published in Physical Review X in 2017."

"The phase diagram produced by the TQM unit, showing all the different magnetic phases that exist in the simplest model on a pyrochlore lattice. Phase III, VI and V are spin liquids. But piecing together the phase diagram and identifying the rules governing the interactions between spins in each phase was an arduous process."

"These magnets are quite literally frustrating. Even the simplest model on a pyrochlore lattice took our team years to solve."

"The Okinawa Institute of Science and Technology (OIST) scientists teamed up with machine learning experts from the University of Munich, led by Professor Lode Pollet, who had developed a 'tensorial kernel' -- a way of representing spin configurations in a computer. The scientists used the tensorial kernel to equip a 'support vector machine', which is able to categorize complex data into different groups."

"The Munich scientists fed the machine a quarter of a million spin configurations generated by the OIST supercomputer simulations of the pyrochlore model. Without any information about which phases were present, the machine successfully managed to reproduce an identical version of the phase diagram."

"When the scientists deciphered the 'decision function' which the machine had constructed to classify different types of spin liquid, they found that the computer had also independently figured out the exact mathematical equations that exemplified each phase -- with the whole process taking a matter of weeks."

Thumbnail
Twelve people were injected with DMT while wearing EEG electrodes. N,N-dimethyltryptamine, also known as DMT, is a chemical that is active in your brain during REM sleep, and acts as a psychedelic when taken as a drug, producing intense, though short, experiences.

What the electroencephalography (EEG) machines recorded was: within a minute of the DMT injection, alpha and beta waves dropped enormously in intensity, while delta, theta, and gamma waves increased somewhat in intensity. In the EEG world, brainwaves are classified into (from lowest to highest): delta waves which are 1-4 Hz, theta waves which are 4-8 Hz, alpha waves which are 8-13 Hz, beta waves which are 13-30 Hz, and gamma waves which are 30-45 Hz.
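
As a reference for how these band intensities are typically computed (this is a generic sketch, not the paper's pipeline; the sampling rate and the random stand-in signal are assumptions): estimate the power spectral density with Welch's method, then integrate it over each band's frequency range.

```python
import numpy as np
from scipy.signal import welch

fs = 256                              # assumed sampling rate in Hz
eeg = np.random.randn(60 * fs)        # stand-in for one channel of EEG

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name}: {power:.3f}")
```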

The complexity of the brain waves (as measured by an algorithm called Lempel-Ziv complexity) also increased tremendously. The intensity of all these changes peaked at about 3-4 minutes in, and gradually declined after that, with the waveforms returning to their pre-DMT levels after about 20 minutes.
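
And here's a short sketch of one common simplified variant of Lempel-Ziv complexity as applied to EEG (my illustration, not necessarily the paper's exact algorithm): binarize the signal around its median, then count the number of distinct phrases encountered scanning left to right.

```python
import numpy as np

def lempel_ziv_complexity(binary_string):
    seen, phrase, count = set(), "", 0
    for bit in binary_string:
        phrase += bit
        if phrase not in seen:   # a new phrase: record it and start over
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

signal = np.random.randn(1000)   # stand-in for EEG samples
bits = "".join("1" if s > np.median(signal) else "0" for s in signal)
print(lempel_ziv_complexity(bits))
```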

The people were also asked about their subjective experiences. People would say things like, "I experienced a different reality or dimension", "I saw geometric patterns", "My sense of size or space was altered", "I felt unusual bodily sensations", "I felt I was displaced from my body", "I experienced elaborate/complex visual images", "My imagination was extremely vivid", "I felt a general sense of gratitude", "The experience had a dreamlike quality", "Things looked strange", "I felt open/sensitive to all emotions (good or bad)", "I experienced the presence of another sentient lifeform", "I experienced a disintegration of my usual sense of self/ego", "My sense of time was altered", "The experience felt more real than this reality", "Sounds influenced things I saw (synesthesia)", "The experience was challenging".

The researchers endeavored to correlate these experiences with changes in brainwaves. The changes in beta waves correlated with everything on the list above, especially altered time perception, synesthesia, and sensing the presence of another sentient lifeform. The changes in alpha waves also correlated with almost everything, especially altered time perception and synesthesia, bodily sensations, and openness to emotion. The changes in theta waves correlated with feelings of disintegration of the ego and the experience of complex imagery and geometric patterns. The changes in the overall complexity of the waveforms correlated with feeling displaced from one's body and altered senses of time and space. The increases in complexity were observed primarily in the occipital electrodes (that is, on the back of the head), while the other changes, especially the alpha wave reductions, were observed in the central channels.

Alpha and beta waves are generally associated with being in a normal wakeful state, while theta and delta waves are associated with REM sleep. The drop in alpha and beta waves is what the researchers think leads to the "DMT breakthrough" experience, which is a term people use to refer to the experience of entering into "another" world and encountering other sentient beings. The researchers note that other studies have shown depressed people tend to have high alpha brainwaves and low delta brainwaves, and DMT's propensity to reverse this may explain its personality-changing effects on depressed people.

One final note. Although the headline of the article says "ayahuasca compound", ayahuasca was not used in this research; DMT was injected directly. The relationship between ayahuasca and DMT is that normally DMT can't be taken orally, because a type of enzyme in the digestive tract called monoamine oxidase chops it up, so nothing happens. However, one of the ingredients in ayahuasca, a plant called Banisteriopsis caapi, acts as a monoamine oxidase inhibitor, allowing the DMT from another ingredient in ayahuasca, a plant called Psychotria viridis (both plants native to South America), to pass from the digestive tract into the bloodstream.

Thumbnail
In the southwestern Slovakian settlement of Vráble, built starting around 5,300 BCE, there's a counter-clockwise rotation in the orientation of the houses, from 32 degrees (on the compass) to 4 degrees, over a 300-year period. Furthermore, similar rotations have been seen throughout the Linearbandkeramik excavation area. Magnetic surveys now allow for large-scale surveys of wide areas without excavating and carbon-dating every house. The Linearbandkeramik (German for "linear pottery culture") is a central European culture from the Neolithic era.

If that isn't weird enough, though, the explanation the researchers suggest is that people were trying to build houses exactly parallel with existing houses, but were consistently off by a few imperceptible degrees, always in the counterclockwise direction. Over hundreds of years, these imperceptible shifts in the orientations of newly built houses add up. Furthermore, the reason they suggest for this is that human beings, perhaps similar to how we have hand preferences and most people are right-handed, systematically "privilege" the left side of our field of view. This goes by the weird name of "pseudoneglect", because, well, I guess the theory is that if we privilege the left side of our field of view, we neglect the right side. That would explain the "neglect" part of the term, but I don't know where the "pseudo" part comes in. It couldn't be because the people who coined the term were listening to the Australian band Pseudo Echo at the time, because the term pre-dates Pseudo Echo. (The term "pseudoneglect" was coined by Dawn Bowers and Kenneth Heilman in 1980; Pseudo Echo's first album came out in 1982.) But it's apparently a real thing, where, for example, if you bring people into your psychology lab, give them lines, and say, "divide the line in half", people tend to mark the middle of the line slightly to the left of the actual center.
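
Here's a toy simulation (mine, not the researchers') of the proposed mechanism, just to show how an imperceptible per-copy bias accumulates over centuries; the bias, noise, and copying-rate values are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

orientation = 32.0   # starting compass bearing, in degrees
bias = -0.1          # assumed tiny counterclockwise error per copy
noise = 0.5          # assumed random copying error, in degrees

copies = 280         # roughly one new house per year over ~300 years
for _ in range(copies):
    orientation += bias + rng.normal(0.0, noise)

print(f"final orientation: {orientation:.1f} degrees")  # drifts toward ~4
```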