I am Anthony Frausto-Robledo, AIA, LEED AP, editor-in-chief at Architosh. I assemble the monthly INSIDER Xpresso newsletter to help us understand emerging technologies (emTech) and the social forces impacting CAD industries like AEC and manufacturing.

This month we bring you a special feature on the landmark tidal shifts taking place in the chip (semiconductor) industry. This is an unusual but important subject for all CAD, BIM, and 3D pros, as it impacts your future devices, platforms, and applications.

This month, in issue #31:
  • Starter Course: The Top Five Must-Reads
  • Special Feature: Chip Technology, Geopolitics, and the CAD Industry
  • emTech: Emerging Technologies -- more semiconductor resources and items, plus computational design news and Digital Blue Foam.
  • The Briefing: Biggest CAD Industry News Last Month

Our INSIDER Xpresso newsletter continues to grow its audience. I want to personally thank everyone who has subscribed, as we know we ask several more questions than just your email address. This data is invaluable to our ability to attract top AEC software and hardware company advertisers.

A Word About Our Sponsor

This month our issue is brought to you by AMD, which in the past few years has flown past Intel in CPU performance. Our special feature this month is all about the critical semiconductors that power our devices and our CAD, BIM, and 3D applications. AMD also has powerful new Radeon PRO GPUs that offer real-time hardware-accelerated ray-tracing technology and several unique industry innovations, at highly competitive performance-to-price ratios.

Starter Course

The Top Five Must-Reads

I've combed the Internet to find some of the most interesting, compelling, or controversial stories that impact AEC and manufacturing industries, and the social and emerging technological forces at play on both. 

1 - AI Disruption: What VCs Are Betting On  This excellent short article notes that PitchBook data indicates AI deals have reached record levels, with investment totaling 31.6 billion USD in just the latest quarter, including 11 deals that closed at more than 500 million USD each.  (Forbes)

Key Quote:  "To be disruptive, you have to believe that AI is going to make 10x better recommendations than what is better today," says Eric Vishria, General Partner at Benchmark. He adds that this is most likely in really complex, high-dimensional spaces, where intermingled factors make correlations very difficult to find via standard analytical techniques.

The article notes which industries meet this threshold of multi-dimensional complexity: software development, cybersecurity, construction, talent management, and drug discovery.

(read more here.)


2 - Robots have entered a new phase — and Cathie Wood's Ark is betting on it.  This article discusses ARK's big stake in Komatsu, a world leader in the construction diggers, dump trucks, and dozers used in both construction and mining.  (Financial Times)

The Big Catch: Komatsu became the seventh most heavily weighted stock in ARK's "Autonomous Technology and Robotics ETF," and Wood is betting on this Japanese titan to lead in autonomous (sans human) construction equipment and, ultimately, on its pathway to the "smart construction" site, where humans will eventually be hard to find.

A Komatsu digger with the company's KomVision technology can detect potential collisions with humans and objects and mitigate the situation.

The big takeaway here is that robots are entering a new phase, where focused semi-automation of equipment like trucks and diggers is taking off with concrete results. Importantly, Wood is also focused on Komatsu's AI-powered drones. (read more here)


3 - How architecture firms are using generative design today  This article interviews computational design leaders at key firms—Zaha Hadid Architects (ZHA), BIG, Outer Labs, 7fold, and RK Architects—about the state of generative design in architecture.  (ArchDaily)

Spacemaker AI is an example of a generative design platform. 

The common thread:  "Generative design" is still maturing but has great potential in architectural practices. Arian Hakimi, a lead designer at ZHA, makes a key distinction between generative design and parametric design, or parametricism (a term coined by Patrik Schumacher in 2008). The latter refers to "a style, design deriving from algorithms used to define shapes and forms." Generative design, by contrast, is an intelligent system set up to resolve design problems via a series of outputs filtered by computers and humans.

 (read on for more.)


4 - Superhuman Raises $75 Million For Its Waitlist-Only Email Productivity App  This is a good article about how Superhuman is so different from most software startups and software companies in general. In full disclosure, this author is a Superhuman user.  (Forbes)

Superhuman is the Silicon Valley waitlist-only email program that costs a stunning 30 USD per month. But the average worker spends 3-plus hours per day in email rather than in productivity and production apps. Superhuman promises stunning efficiency gains in email, and the app more or less delivers on its promises.

Shocking Facts:  The waitlist-only email program reportedly has 450,000 people on its waiting list, and about half of its staff of 85 works on onboarding and observing new users. While the article reveals that its users spend on average about 3 hours per day in Superhuman—the same average as for email in general—Superhuman's big claim is that users can cut their email time by up to 50 percent.

As a Superhuman user myself, I can attest that this claim is more than possible. I have reduced my daily email time by one hour, down to 2 hours from 3. That's a huge savings. During the onboarding process—a 40-minute Zoom call—my onboarder learned about my industry, my apps, and how I use (and hate) email. He then quickly tutored me in Superhuman's basic keyboard commands and helped me build automations and power-user shortcuts.

Why CAD/BIM users may like Superhuman.  The program's killer feature is its keyboard shortcuts, and they are smartly developed. The program even has a type of "command line." Hello, AutoCAD users! If you can't remember a keyboard shortcut, you can open the command line with a shortcut and start typing what you are searching for.

Email does in fact take away about 2-3 hours of an architect, engineer, or construction professional's daily time. A program that can steal back even just 45 minutes a day is a big return on investment.

Imagine if CAD Companies Worked like Superhuman?  Most folks working in CAD industries tap only a small percentage of a CAD, BIM, or 3D tool's true power, much less learn how to customize that tool to their specific needs. But what if CAD companies offered a premium-tier, concierge-like onboarding service like Superhuman's? Imagine the potential time savings!

(read the full article.)

5 - The Gulf is counting on its smart cities for a sustainable and profitable future  This story reveals that the Middle East is home to some of the most aggressive smart city initiatives in the world.  (ComputerWeekly)

Big Picture?  The Gulf region is forging ahead with smart city plans, aiming to become a trillion-dollar digital economy by 2025. Abu Dhabi's Masdar City was launched in 2008 as the world's first sustainable smart district, and it helped propel many more smart city plans across the Gulf. Key characteristics of Masdar include: (1) buildings consume 40 percent less energy than conventional buildings, (2) power demand is offset by a 10MW on-site solar power plant, and (3) 1MW of rooftop solar panels. The article also has data on what residents expect of smart cities, including the expectation that their smartphone will be their primary channel for accessing city services.


Five More Stories

We are skipping the Five More Stories section and the "Member Access—(emTech) Section+" article this month. We will return to a more enhanced version in the near future. 


More (emTech) Below Our Special Feature
Special Feature

Chip Technology, Geopolitics, and the CAD Industry

The global semiconductor industry is going through landmark tidal shifts that will impact all software. We review the scene and its implications for platforms and devices, and the possible impacts on the CAD industry.


Intro and Geopolitics

Rising tensions between the West (specifically the United States) and China are ushering in large-scale change in the semiconductor industry. With nearly all leading-edge node semiconductor manufacturing in either Taiwan or South Korea—and not in the US with Intel in its typical leadership position—the US government has stepped in to assist in a new era of industrial nationalism. In early June of this year, the US Senate passed legislation (the US Innovation and Competition Act, USICA) that includes 52 billion USD of federal funding to accelerate domestic semiconductor research, design, and manufacturing, in what is known as the CHIPS for America Act.

Considered an issue of national security, semiconductors power nearly every type of digital device and certainly every type of computer system running an operating system, including military systems. The US has traditionally led the world in chip design and manufacturing. While design leadership remains in US hands, its national manufacturing champion, Intel, has faltered. (see the lower half of this article, below)

On top of security concerns, the global semiconductor industry is behind and unable to meet demand. There is a ripe economic opportunity, and every major global economy—the US, EU, China, and Japan—wants more of the burgeoning action.

On the back of US government investment in the domestic semiconductor industry, Intel's new CEO Pat Gelsinger announced a 20 billion USD expansion adding two new fabs at Intel's Chandler, Arizona campus, where Fab 42 is fully operational producing 10nm node chips.

The new US Chips Act could spur the development of up to 10 new chip manufacturing factories. Intel has promised two new ones in Arizona (see above). Similar EU plans call for self-sufficiency in the design and manufacture of semiconductors in the EU. The US, which held a 37 percent share of semiconductor and microelectronics production in 1990, today holds only a 12 percent share.

While China aspires to self-sufficiency in semiconductors, it lacks native companies that can develop and manufacture the equipment, non-wafer materials, and wafer materials used in the manufacture of semiconductors. The US and EU dominate the critical equipment market, with nearly zero equipment makers in Taiwan and a single-digit share in China. What China does have is a growing "fabless" chip design industry.

"This is a moment of overlap between an old paradigm being slowly replaced by a new paradigm."

The global democratization of semiconductor design, development, and manufacturing is altering the possible futures for the computer software industry, with significant implications for engineering software in the decade ahead. The once-stable "Wintel"-based digital economy has largely de-coupled. In December of 2020, Microsoft announced that it was designing its own ARM-based chips for servers and Microsoft Surface devices. The server chips are for the company's own Microsoft Azure cloud data centers. This play largely imitates rival Amazon, which designed its own ARM-based chip (Graviton2) to power its AWS datacenters.

Microsoft Windows PCs are no longer the center of computing but rather persist in our midst, much like gasoline cars amid the EV revolution. While they remain our primary equipment in the CAD industries, they are supplemented by a rapidly changing landscape of new types of smaller devices. The promises of cyclical economic improvement by the Wintel hegemony first slowed and then rather bluntly collapsed in the past few years with Intel's manufacturing hiccups. (see more on that below)

This is a moment of overlap between an old paradigm being slowly replaced by a new paradigm. 

Moore's Law: Then and Now

Since Intel's founding and the emergence of the x86 CPU architecture, Moore's Law has largely held its promise. Specifically, Moore's Law—named after Gordon Moore, an Intel co-founder—says that the number of transistors on microchips doubles every two years. That requires a compound annual growth rate of 41 percent.

Moore's Law in vivid, real data, across decades of chip advancements from Intel, Motorola, ARM, Apple, IBM, and others. The chart documents very clearly the doubling of chip transistors approximately every two years—Moore's Law—an annual compound rate of approximately 41 percent. Yet there is a pattern in the dots that we highlight in the next image below. (Image: Wikipedia Commons)
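The doubling arithmetic behind that 41 percent figure is easy to verify; here is a minimal sketch in Python (pure arithmetic, assuming nothing beyond the two-year doubling stated above):

```python
# Moore's Law: transistor counts double roughly every two years.
# A doubling over 2 years implies a compound annual growth rate (CAGR)
# of 2**(1/2) - 1, i.e. about 41 percent per year, as cited above.
cagr = 2 ** (1 / 2) - 1
print(f"implied annual growth: {cagr:.1%}")  # → 41.4%

# Sanity check: a decade at that rate is five doublings, or 32x.
decade_factor = (1 + cagr) ** 10
print(f"10-year growth factor: {decade_factor:.0f}x")  # → 32x
```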

In the 70s and 80s, Intel's chief microprocessor competitors such as Motorola and IBM largely kept the semiconductor industry red hot with advancements. AMD added competitive pressure in the 90s and into this century, and the competition for servers, in particular, led Intel to push for larger and more powerful server CPUs. 

In the years from 1994 to about 2007, we can see (in the chart) a massive gap between powerful server chips like IBM's Power6, Intel's Itanium 2, and AMD's K10—all basically over 500 million transistors—compared to the ARM Cortex-A9 with less than 50 million transistors.

Yet, something changes from 2008 onward as ARM advances at a steeper rate than everyone else in the chip industry. (see the green line in the chart below.)

The Moore's Law chart again, this time highlighted. The red line is Intel x86 chip progress; the green line is ARM chip progress. As we can see, from about 2008 ARM's progress gets steeper while Intel barely kept pace with Moore's Law. Then by 2016, things for Intel slowed further while ARM's progress again accelerated (a steeper line).

Suddenly, ARM licensee Apple comes out with the A7, a landmark 64-bit SoC with over 1 billion transistors. At this point, Apple's iPhone chip has more transistors than IBM's Power6 from 2007, the year the iPhone was introduced. In just six years, a chip in a phone surpassed the transistor count of one of the world's most powerful server chips from IBM.

ARM's ascendance is just one factor in the global semiconductor tidal shift. Intel's missteps in keeping pace with Moore's Law are another—and we'll get to that issue in a moment. No other company signifies the democratization forces in the global semiconductor industry as much as ARM, which licenses its chip designs to anyone who wants to work in the ARM ecosystem.

ARM, Apple, and Intel 

In the latter half of the first decade of this century, Advanced RISC Machines (ARM) emerged as the clear leader in powerful processors for mobile devices and equipment. Once one of ARM's major founders, Apple had experience with RISC-based CPUs from the Newton line and was committed to ARM chips for the original 2007 iPhone. (see: AppleInsider, "How ARM has already saved Apple — twice," 9 June 2020)  Once the iPhone was out of the gate, ARM's chip development continued to accelerate as the smartphone era came into being. Today ARM owns the mobile device market at the platform architecture level.

The competitive pressure to deliver more processing power and longer battery life at the same time put ARM chips on a steeper progress curve, destined to match and overtake Intel x86 in performance per watt. While Intel has plans to become the market leader once again—as measured in performance per watt, not just absolute performance—at the moment, Apple holds that crown.

The Apple A15 Bionic is Apple's latest SoC, powering the upcoming iPhone 13 line. Remarkably, the A15 Bionic doesn't make much progress in CPU performance over the A14. (see: SemiAnalysis, "Apple CPU Gains Grind To A Halt And The Future Looks Dim As The Impact From The CPU Engineer Exodus To Nuvia And Rivos Starts To Bleed In," 14 Sep 2021) Apple appears to have been hit by a talent exodus from its semiconductor ranks to both Nuvia and now Rivos—which we discuss below. Still, the A15 Bionic manages to beat the GPU performance of any other smartphone chip in existence (including its own A14) by 50 percent. The A15 Bionic has 15 billion transistors, just 1 billion fewer than the Apple M1 chip.

However, Apple is destined to face steep competitive pressure from none other than its ex-chief CPU architect, Gerard Williams III, who left Apple in 2019 to form Nuvia. Qualcomm swiftly acquired Nuvia for 1.4 billion USD, reportedly after multiple billion-plus offers from Qualcomm rivals.

Apple largely built its world-class semiconductor design team from the ground up after acquiring PA Semi in the spring of 2008. A Forbes story notes that Steve Jobs wanted to ensure Apple could differentiate its new iPhone from a raft of new competition. The acquisition was a blow to Intel, which had hoped to convince Apple to build future mobile devices on its Atom processor. Why Intel failed to secure a footing in the smartphone chip market is a larger story best told another day, but suffice it to say, its failure helped secure ARM's dominance.

This chart from a Nuvia blog post shows very clearly Apple's current performance-per-watt leadership, while envisioning (in blue) that Nuvia's planned custom ARM-based Phoenix chip would exceed Apple's. Qualcomm is hoping Gerard Williams and his team will help it beat Apple and take the performance-per-watt crown.

While Intel and AMD fought in a post-PowerPC era (Apple moved Macs to Intel in 2005), ARM quietly advanced its chip architecture to squeeze every ounce of compute performance out of every watt. While many smartphone makers used largely unaltered ARM chip designs for their smartphone CPUs, Apple had a special license with ARM to develop custom ARM-based chips with proprietary logic.

As you can see from the two charts above (the second chart in this article and the chart directly above), the ARM world has caught and surpassed the Intel x86 world. New Intel CEO Pat Gelsinger says Intel will retake the performance-per-watt crown by 2025. Given recent history and ARM's inherent architectural advantages, that claim seems like a risky bet. If anything, Intel (and AMD too) will face significant new competition for performance per watt from the likes of another breakout new company, Rivos Inc.

Process Technology—Intel's Faltering

We will discuss Nuvia and Rivos in the next section. What is critical to understand now is how Intel fell behind in performance—not just against the ARM-based chips at Apple but even against AMD.

Intel began to stutter in its manufacturing leadership a few years ago, but things took a horrible turn for the worse in the summer of 2020, when Intel announced a significant delay in its next manufacturing milestone: Intel would not move to its 7nm process node for several more years. That process technology has since been renamed "Intel 4" and is due in H2 2022.

From its earliest days, Intel orthodoxy held that it could lead the world in semiconductors if its chip designers could work directly with its manufacturing engineers—something not easily done when working with chip foundries halfway around the world. That was the philosophy, but not necessarily the reality. As a case in point, Dutch semiconductor equipment maker ASML—which manufactures the incredibly complex lithography systems critical to the production of chips—partnered with Intel back in 2012 to develop extreme ultraviolet (EUV) lithography systems for the next era of tiny chips. But ASML also partnered with Samsung and TSMC on the same technology. Today, TSMC alone is estimated to possess half of all the EUV lithography machines in the world.

Final assembly of an ASML EUV lithography machine. These units cost over 150 million USD and can require up to 6 months to install before use. (Image: ASML)

TSMC's 5nm node-based chips—like Apple's new M1 processor in its new Macs—are entirely reliant on ASML's EUV lithography machines, each of which costs upward of 150 million USD and can take 4-6 months to install before use. Intel's earliest 10nm chips pursued the smaller node using conventional lithography with quad patterning, but that approach failed. While Intel has since worked out its 10nm chips, which it is shipping today, ASML's EUV machines come into play when Intel's 7nm chips ramp in the near future.

Intel's chip manufacturing problems may ultimately stem from the attrition that comes with global specialization. Companies that try to do the whole widget themselves succumb to companies that let other, smaller companies risk the capital to tackle specialized components that are highly competitive. In the mid-80s, Intel abandoned the RAM market because it could not compete with major Japanese rivals that poured massive capital into new factories to produce the world's best random-access memory chips.

Now, with TSMC and Samsung producing hundreds of millions more chips for the smartphone market—a market far larger than the computer market—the Asian chip foundries have more capital and larger valuations. With chip fabs costing tens of billions of dollars to build, the market leaders have the capacity to ramp new process nodes more quickly than their smaller rivals. In simple terms, they have the money to start on new process node technology sooner.

Complicating matters for Intel, its tight-knit relationship between chip design and chip manufacturing meant that when the company ran into trouble back in 2018 with its 10nm node ramp, it didn't have any outside fabs to turn to. That's because Intel chips are design-optimized for Intel's own chip-making tools, which third-party foundries don't own. Intel couldn't simply go to Samsung and say, "make this design for us."

The decades-long unique advantage Intel held by designing chips in tight linkage with its manufacturing tools only became a curse once chips shrank to levels where electricity started to behave in unexpected ways. The solutions required novel materials and redesigns and pushed Intel into an unprecedented situation.

Meanwhile, contract chip fabs in Asia worked out such issues more swiftly, because standard ARM-based chip designs and custom designs from AMD, NVIDIA, and Apple had no such linkages between the designs and the tools used to produce the chips.

AMD's Rising Star

With Intel's unique situation coming back to haunt it in the latter years of the last decade, nearly every major rival made significant progress, capturing more of the market and outright performance leadership. Intel's primary rival in computer chips, AMD, forged ahead with brilliant new CPU designs manufactured in Asia.

AMD's flagship CPU, the Ryzen 9 5950X of the Ryzen 5000 series, is fabricated by TSMC on a 7nm process and does not yet use EUV. Still, AMD leads the world in the best balance between single-core and multi-core chip performance, per Geekbench results. Its 16-core Ryzen 9 5950X boasts an average single-core score of 1,689 and a multi-core score of 16,681. While Intel's 11th-generation Core i9-11900K boasts a slightly better single-core score (1,853), at 8 cores its multi-core score is a long way off.

In essence, AMD is delivering industry-leading single-core performance with high multi-core performance to boot. This is the kind of balanced top performance that matters significantly in the CAD industry.
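As a rough check of that balance claim, one can compute how the quoted multi-core score scales from the single-core score (both figures as given above; real-world scaling depends heavily on the workload, so treat this as illustrative):

```python
# Geekbench figures for the AMD Ryzen 9 5950X, as quoted in the article.
single_core = 1689
multi_core = 16681
cores = 16

scaling = multi_core / single_core   # speedup of multi-core over single-core
efficiency = scaling / cores         # fraction of the 16 cores' potential realized
print(f"{scaling:.1f}x scaling across {cores} cores "
      f"({efficiency:.0%} per-core scaling efficiency)")
```

By this back-of-envelope measure, the 5950X turns industry-leading single-core performance into roughly a 9.9x multi-core speedup.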

Nuvia and Rivos—Ex-Apple Startups

Earlier, we noted that Apple leads the world in performance per watt. Its M1 processor, for example, boasts Geekbench single-core scores of over 1,700. By comparison, Intel's 11th-generation chips score slightly higher than Apple's M1, at 1,757 and 1,853 for the Core i9-11900KF and Core i9-11900K, respectively.

However, those chips consume vastly more energy: the M1 has a published TDP of 39 watts, while the Intel Core i9-11900K has a rated TDP of 125 watts. For Intel CEO Pat Gelsinger to state that Intel will take the performance-per-watt crown by 2025 sounds too remarkable to be true.
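That gap is easier to see as a ratio. Here is a quick sketch using the scores and TDPs quoted above (TDP is only a rough proxy for actual power draw, so the result is illustrative, not a benchmark):

```python
# Geekbench single-core score and rated TDP (watts), as quoted above.
chips = {
    "Apple M1": (1700, 39),
    "Intel Core i9-11900K": (1853, 125),
}

for name, (score, tdp) in chips.items():
    print(f"{name}: {score / tdp:.1f} points per watt")

# The M1 comes out near 44 points/W versus roughly 15 for the i9-11900K:
# about a 3x performance-per-watt advantage despite a lower raw score.
```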

And this is why: Intel isn't just up against serious competition from AMD and Apple. It is also competing against new startups like Nuvia and Rivos. Let's look at why these new chip startups are important.

As we wrote in an earlier issue of Xpresso, Gerard Williams III was the instrumental chief architect of CPUs and SoCs at Apple before he left to start Nuvia in 2019. He reportedly left with 100 engineers from Apple and founded Nuvia with Manu Gulati (former lead SoC architect at Google) and John Bruno (former systems architect at Google).

Gerard Williams III (middle) was Apple's chief CPU architect and largely responsible for the market-leading A-series custom ARM-based SoCs that power iPhone and iPad devices. (Image: Nuvia/Qualcomm) Apple sued Williams almost immediately after he left to form Nuvia, alleging he recruited Apple engineers while still employed by Apple. Williams has counter-sued.

His formidable team is now part of Qualcomm after the company acquired Nuvia in 2021. And while Nuvia initially aimed at performance-per-watt leadership for datacenter chips with its planned Phoenix CPU, Qualcomm leadership seems to have a different idea. The Nuvia team is reportedly taking the Phoenix technology—which is ARM-based—and using it to compete directly with Apple in mobile devices like tablets, smartphones, and small laptop computers on the Chrome and Windows platforms.

If Williams' departure from Apple wasn't enough, new chip startup Rivos was also born of an exodus of Apple semiconductor veterans. Rivos Inc. is still in stealth mode, only four months after its formation. Unlike Qualcomm's Nuvia team, Rivos is focused on RISC-V platform technology, not ARM, and is aiming at the datacenter, where Nuvia was originally aiming.

RISC-V is an open specification and open platform, but it is not an open-source processor. Both RISC-V and ARM are based on "reduced instruction-set computing" (RISC) architecture, while Intel x86 has primarily employed "complex instruction-set computing" (CISC) architecture throughout its history.

Multiple sources on the Internet explain the difference between RISC and CISC processors. The quick explanation that is key to this article is that RISC uses fewer processor clock cycles per instruction and a standardized load-store model. The big takeaway is that RISC's focus on reducing overall clock cycles makes it superior for power consumption. In other words, RISC is more energy-efficient than CISC. It should not be surprising, therefore, that RISC-based ARM chips have led the world in mobile device semiconductors, where power is everything.

When Apple first had discussions with PA Semi a few years before the 2008 acquisition, it considered PA Semi chips for future Mac computers. PA Semi founder Daniel W. Dobberpuhl and his team wished to design an enormously powerful chip based on the PowerPC (RISC) architecture that used little power. Before the acquisition, in February of 2007, PA Semi debuted a 64-bit dual-core microprocessor that was 300 percent more energy efficient than any comparable chip. That chip consumed 5-13 watts at 2 GHz.

This was the team that formed the basis of Apple's A-series chips, which lead the industry in performance per watt. But now more of these folks from Apple's semiconductor team are branching out with their own chip design startups. This is normal behavior, and Apple is reportedly trying to recruit Nuvia engineers.

Like Nuvia, Rivos may be another serious competitor, not just to Apple but to AMD and Intel. The company aims to build the first high-performance RISC-V core, and it has garnered many senior CPU architects from Apple, Google, Marvell, Qualcomm, Intel, and AMD.

The bottom line: the semiconductor market is incredibly competitive at a time when chip demand is outpacing supply, when leadership is increasingly global, and when core technologies and talent are becoming more democratized across geopolitical regions.

Impacts on the CAD Industry

With all these landmark tidal shifts in a semiconductor industry once dominated by Intel and much more US-based, this next decade may see the complete upending of the Wintel hegemonic structure of the IT industry. Windows itself is being more robustly rewritten for the ARM architecture.

The main impact on the CAD and 3D software industries comes from the massive code rewrites necessary for companies to respond to the times. At present, ARM hasn't just matched Intel x86; it has critical performance-per-watt superiority, and that matters as much in cloud computing as in mobile computing. Amazon, Microsoft, Google, and Apple are all moving toward ARM-based datacenters because the economics are better.

"Such a sea change will be punishing for CAD industry incumbents who lack the experience and expertise in multi-platform and multi-device development."

At any moment, a chip designer could design an ARM chip matching the larger die sizes of AMD's chips, for example. What would that performance be like? How would the industry respond?

Fujitsu's A64FX is an ARM-based chip that powers the world's fastest supercomputer. It is the first ARM chip to implement the ARM Scalable Vector Extension (SVE) instruction set, increasing vector lengths from the standard 128 bits up to 512 bits. If NVIDIA does in fact acquire ARM, what is to stop it from entering the ARM server and desktop CPU market and melding its GPU technologies into ARM SoCs and CPUs?

In truth, what is keeping someone like Apple from developing a large, monster custom ARM chip with technologies like this? Perhaps only scale and return on investment. Perhaps it should partner with Fujitsu or someone else on new larger-die HPC ARM chips?

With some CAD and 3D developers moving their software over to Apple Silicon (ARM-based Apple SoCs), Architosh has learned that the process is involved. We learned from Vectorworks, the leading CAD solution on the Mac, that over 120 dependencies in the code needed to be rewritten from x86 to ARM, and Vectorworks had to work these out with third-party developers. Most CAD, BIM, and 3D software includes multiple dependencies—physics engines, digital terrain modeling engines, CFD engines, geometric modeling kernels like Parasolid and ACIS, and innumerable rendering and visualization engines. This will be a disruptive process at various levels depending on each application's legacy dependencies. It provides an opening for newcomers to react and develop more quickly with innovative new CAD industry offerings—Shapr3D, for example.

If Intel's Pat Gelsinger meets his stated mission of taking the performance-per-watt crown by 2025, then only minor and slow disruption will occur in engineering software markets like CAD, BIM, and professional 3D. The heavily Windows-dominated engineering software world will largely remain on Intel x86 codebases, while incumbents slowly deploy newly written ARM-based applications for the plethora of ARM-based devices that will likely remain dominant even past Gelsinger's 2025 timetable.

However, if Intel fails to meet its new mission—and this author personally feels the odds are against it obtaining the performance-per-watt crown—then only AMD will be left to hold off an ARM onslaught. Operating systems and software applications will rapidly migrate to a sea of best-in-class next-gen ARM-based computing devices, which have become ever more critical in the "remote-work" reality of the post-pandemic context.

Such a sea change will be punishing for CAD industry incumbents who lack the experience and expertise in multi-platform and multi-device development. These companies will suffer a similar paradox to Intel—when the thing that has allowed you to streamline and succeed now becomes your handicap to adaptation. 


Editor's Note

We have further reading references related to this article in the Curated emTech section below.


Our Sponsor

Curated content: Emerging Technologies and their potential impact on CAD-based industries.


More Semiconductor Ecosystem Resources

The semiconductor industry is going through a landmark tidal shift, as we learn from our Special Feature above. Here are some additional resources that expand on that change.

Back in December 2020, SemiAnalysis wrote an interesting article about Apple's A14 Bionic, the chip just superseded by the A15 Bionic in the new iPhone 13 line. We have already mentioned that the A15 Bionic has 15 billion transistors. The A14 Bionic had 11.8 billion, achieved on an 88 mm² die.

The reason I bring this article up is that it mentions Apple's historical ability to achieve 90+ percent of a process node's theoretical density. In the A14 Bionic, Apple achieved only 78 percent effective transistor density, missing its historical 90+ percent average. Rather than reading this as Apple's inability to continue with breakthrough semiconductor design, the article suggests the issue is the slow death of SRAM scaling. Geoffrey Yeap of TSMC notes that a typical SoC consists of 60 percent logic, 30 percent SRAM, and 10 percent analog/IO. With each process shrink, SRAM does not scale down at the same rate as logic.
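That 78 percent figure is easy to reproduce. TSMC's N5 node has a commonly cited theoretical peak density of roughly 171 million transistors per mm²; that figure comes from outside the article, so treat it as an assumption. With it, the A14's published numbers work out as follows:

```python
# Effective transistor density of the A14 Bionic vs. the assumed N5 peak.
transistors = 11.8e9          # A14 Bionic transistor count
die_area_mm2 = 88             # A14 Bionic die size
n5_peak_mtr_per_mm2 = 171.3   # assumed theoretical peak for TSMC N5 (MTr/mm^2)

effective_density = transistors / die_area_mm2 / 1e6  # MTr/mm^2
utilization = effective_density / n5_peak_mtr_per_mm2

print(f"{effective_density:.1f} MTr/mm^2, {utilization:.0%} of peak")
# → 134.1 MTr/mm^2, 78% of peak
```

About 134 million transistors per mm², or roughly 78 percent of the node's theoretical density, matching the article's figure.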

A demonstration by TSMC of a 3D stacked die for addressing slowing SRAM scaling. (Image: TSMC) 

This is an interesting tidbit of insight, given that the industry is wondering how Apple Silicon chips will include large amounts of onboard memory, pushing beyond the 16GB limit of today's M1. It turns out TSMC and Samsung are addressing the challenge of slowing SRAM scaling with 3D stacked SRAM. Architosh will be pursuing this insight further as we ponder the future M1X and M2 chips slated for 2022. And we are especially interested to know how Apple intends to create a chip for its much larger computers, such as the Mac Pro (in both sizes). Will Apple alter the 60/30/10 proportions to make room for more SRAM? Or will it forge a different path for managing memory?

The SemiAnalysis article also discusses how Moore's Law is slowly suffocating from SRAM scaling issues. Solutions beyond 3D stacking include NRAM, FeRAM, and MRAM.

Computational Design News

A new MSc in Computational Design and Digital Fabrication is being offered from the University of Nicosia for the 2021 - 2022 academic year. The degree is offered in conjunction with the University of Innsbruck, Institute for Experimental Architecture Hochbau, in Austria. 

A new Master of Science in Computational Design Practices program has launched at Columbia University with both full-time and part-time programs. Application deadline is 15 January 2022. 

Digital Blue Foam spoke with us recently about updates to its SaaS tool for generative and computational design. DBF is a direct competitor to Spacemaker AI, which Autodesk acquired. We will write more fully about the key DBF updates, but for now we can summarize that the young company is making big headway with key clients, expanding further with universities, and adding additional layers of features to its program.

Digital Blue Foam has several new features in its working SaaS solution for early-stage generative, AI-assisted design for buildings. The program can now rate a neighborhood for "walkability" and tell you the proportion of building uses within a specific walking radius. This is just one of many new features in Digital Blue Foam. 

One key new feature is the ability to calculate a neighborhood's walkability and generate a score, in addition to instantly calculating a neighborhood's building-use mix. This is useful for early-stage or predesign work, for example when a client is investigating multiple potential building sites, or when a building owner is weighing planned uses for a building against what already exists in the neighborhood within a short walk.
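DBF has not published how its use-mix calculation works, so the following is only a toy sketch of the general idea: tally the uses of buildings that fall within a walking radius and report each use's share. The function name, data shape, and 800 m radius (roughly a 10-minute walk) are all our assumptions:

```python
from collections import Counter

def use_mix(buildings, radius_m=800):
    """Toy sketch of a building-use mix within a walking radius.

    `buildings` is a list of (use, distance_m) pairs measured from the
    site under study. Returns each use's share of the nearby buildings.
    Illustrative only -- not Digital Blue Foam's actual method.
    """
    nearby = [use for use, d in buildings if d <= radius_m]
    if not nearby:
        return {}
    counts = Counter(nearby)
    return {use: n / len(nearby) for use, n in counts.items()}
```

For example, with two buildings inside the radius and one beyond it, `use_mix([("residential", 100), ("retail", 200), ("residential", 900)])` reports a 50/50 residential/retail mix.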

That's all we have for this section this month, as I took a short but much-needed vacation. Next month we will return to a normal and much denser curated emTech section.


What's Cooking: Future Xpresso Features

Our next issue of Xpresso (#32) has a few possibilities for its feature. We are currently working on stories around AMD, Autodesk, Vectorworks, and Trimble, all excellent and unique content.

The Briefing

Biggest CAD Industry News Last Month

(the biggest news and features in August)

Feature: SIGGRAPH 2021 — Year of the Metaverse, and Virtual and Global Production (Part 1)  This feature from west coast editor Akiko Ashley delves into everything that was cool at the virtual SIGGRAPH conference this year.   [7-10 min. read]  (Architosh). Recommended for all AEC and pro 3D users.

Feature: SIGGRAPH 2021 — Year of the Metaverse, and Virtual and Global Production (Part 2)  This is the second part of our feature from west coast editor Akiko Ashley on the SIGGRAPH conference.  [7-10 min. read]  (Architosh). Recommended for all AEC and pro 3D users.

Feature: Two Years In — Autodesk's Assistance to Notre-Dame in Paris.  In this special report we talk to Autodesk about how the company got involved in the restoration efforts of this cherished French landmark. The article specifically looks at Revit and BIM as it is deployed on Our Lady of Paris.    [7-10 min. read]  (Architosh). Recommended for all BIM users. 

AMD Brings Radeon PRO W6000 Series GPUs to Mac Pro
AMD recently announced new pro GPUs for CAD, BIM, and 3D professionals. Now the company has released a few of these new offerings, including a very high-end version for the Mac Pro.
 [5-8 min. read]  (Architosh).  Recommended for all readers of Architosh! Big news item!

AMD Radeon PRO W6600 Workstation GPU Now Available

An excellent new low-cost, mid-range CAD, BIM, and 3D workstation GPU from AMD offers hardware-accelerated real-time raytracing performance.    [3-6 min. read]  (Architosh). Recommended for real-time ray-tracing rendering professionals.

Enscape Update Adds Real-Time Rendering and VR Inside Archicad.   
Archicad users gain real-time rendering and VR with Enscape's integration with Archicad, a leading BIM solution in the EU and globally.     [3-6 min. read]  (Architosh).
Recommended for all Archicad users.

NVIDIA Intros RTX A2000 GPU — New Compact Design
This new GPU is a second-generation RTX card that offers real-time hardware-accelerated raytracing at a low price point. It will compete with AMD's new Radeon PRO W6600 at a lower cost but with less VRAM.    [3-6 min. read]   (Architosh)  Recommended for real-time ray-tracing rendering professionals.

End Note
Remember you can sign up for architosh INSIDER Xpresso here -- a unique CAD industry newsletter with a special focus on emergent technologies (emTech) like AI, ML, robotics, 3D printing, AAD, computational design, and smart cities tech.

As we move forward, our format will evolve but will aim to focus on emTech in AEC and MCAD. We welcome your suggestions.

To see past issues, visit this link here.  (Sign up for the newsletter here.)

Anthony Frausto-Robledo, AIA, NCARB, LEED AP

This is a free newsletter and companion publication to Architosh.
Architosh is subject to conflicts of interest when we write about CAD/AEC/MCAD/3D software/hardware and other related tech companies in the market. In the interest of disclosure, we encourage readers of this newsletter and the Architosh website to visit our Ethics page where we maintain a full list of Held Securities and discuss Our Disclosures. 

This statement and the intent of this section is consistent with Architosh's Disclosure statement on our Ethics page here.  [This rewritten section deprecates all other instances of this section for past issues of the newsletter.]
Architosh on Facebook
Architosh on Twitter
Architosh on LinkedIn
Architosh Readers Group on LinkedIn
Copyright © Architosh, INSIDER Xpresso, All rights reserved.

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.