Tiny CPU firm claims shocking performance wins in bid to oust Intel and ARM

Nuvia is not a household name for CPU designs, but this startup promises to revolutionize cloud server CPU performance in the coming years. 

According to the company, its Orion server system-on-chips (SoCs), based on its custom Arm-architecture Phoenix cores, will deliver over two times higher single-thread performance than existing x86 designs, such as AMD’s Zen 2 and Intel’s Sunny Cove, while using a third of the power. 

Nuvia’s promises seem very bold at first, but the company’s founders — John Bruno, Manu Gulati, and Gerard Williams III — have a track record of developing successful processors and system architectures at Apple, AMD, Arm, and Google. The company was founded in 2019 with the aim of disrupting the cloud server processor market by offering tangibly higher performance at a fraction of the power of x86 SoCs. 

Nuvia’s Orion SoCs use proprietary Phoenix cores that are presumably based on the Armv9 architecture, but feature a completely overhauled CPU pipeline as well as certain exclusive enhancements. The company says it wants Orion/Phoenix to deliver the highest possible single-thread performance (at a given per-core wattage) at consistently high utilization and frequency. 

Performance revolutionized?

Modern server processors feature tens or even hundreds of cores, but since the thermal design power of CPUs is limited, these multi-core monsters hardly ever draw more than 10W per core. According to Nuvia, the sweet spot for server processors lies in the sub-5W-per-core range, which is where Arm can beat x86 in terms of single-thread performance. While x86 cores can scale well above 20W per core, the performance of x86 solutions will be “only 40% – 50% faster”, according to Nuvia. 

“The optimal solution is one where a workload finishes in the shortest time possible while consuming the least possible power,” said John Bruno, founder and SVP of system engineering at Nuvia. “Nuvia is designing the Phoenix core to meet these targets. Built around Phoenix is the Orion SoC and hardware infrastructure specified to support peak performance on real cloud workloads without bottlenecks.” 

To prove its points, Nuvia demonstrated performance per watt of various modern CPUs in Geekbench 5. The company compared Apple’s A13, A12Z, AMD’s Ryzen 7 4700U (Zen 2), Intel’s Core i7-1068NG7 (Sunny Cove), Core i7-8750H (Skylake), and Qualcomm Snapdragon 865 SoCs. 

Based on the data demonstrated by Nuvia, its CPU cores can deliver about 50% higher performance in Geekbench 5 (2000 points vs. 1300 points) than AMD’s Zen 2 and Intel’s Sunny Cove at 1/3 of the power (4.40W vs. 4.80W). Furthermore, Nuvia says it has not yet disclosed the full potential of its cores. 
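Taking the quoted Geekbench 5 figures at face value (these are values read off Nuvia’s own chart, not independent measurements), the implied performance-per-watt gap is easy to work out:

```python
# Rough perf-per-watt comparison using the Geekbench 5 figures quoted above.
# Treat these as illustrative claims from Nuvia's chart, not measured data.
def perf_per_watt(score: float, watts: float) -> float:
    """Geekbench 5 points delivered per watt of per-core power."""
    return score / watts

nuvia = perf_per_watt(2000, 4.40)   # claimed Phoenix core
x86 = perf_per_watt(1300, 4.80)     # Zen 2 / Sunny Cove class

print(f"Nuvia: {nuvia:.0f} pts/W, x86: {x86:.0f} pts/W, "
      f"advantage: {nuvia / x86:.2f}x")
```

On these numbers the efficiency advantage works out to roughly 1.7x, so the more dramatic claims in the article presumably rest on comparison points (higher x86 per-core wattages) not captured by this single pair of figures.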

At present, Nuvia has only demonstrated simulated performance of its Orion SoCs based on Phoenix cores, so it remains to be seen how a real product will perform under typical workloads. Meanwhile, Nuvia is confident that it will retain its single-thread performance-per-watt leadership and aims to bring the first Orion SoCs to market over the next 18 months, sometime in late 2021 or early 2022, so the wait should not be too long.

Intel’s Xe Graphics Might Mean You No Longer Need a Separate Graphics Card to Play Games

Intel has finally given actual details on its upcoming Xe GPU architecture that will power its future CPUs with integrated graphics. While Tiger Lake, Intel’s upcoming 11th generation, 10nm mobile processor, will be the first product to leverage Xe, the company also said it was designing a more powerful discrete mobile GPU based on Xe (code-named DG1) for creators. During a press event, Raja Koduri, senior vice president, chief architect, and general manager of Architecture, Graphics, and Software at Intel, said it will ship this year, too. Intel is also working on a discrete GPU for enthusiast gamers, the Xe HPG, and yes, it will have dedicated ray tracing hardware. Koduri said it’s scheduled to ship sometime in 2021, but did not say whether it would be available in desktop or mobile (or both) form factors at launch.

For some historical context, Intel’s first generation of GPUs dates back to its last discrete graphics card, circa 1998, and Gen 5 through Gen 11 have been integrated into Intel CPUs over the past decade. (Gen 10 technically never existed, though; it was scrapped alongside Intel’s Cannon Lake line of CPUs.) So these Gen 12 graphics, totally redesigned and coming in both integrated and discrete packages, are kind of a big deal.

At an event earlier this week for press, Intel officially confirmed that its discrete Xe GPU will have ray tracing, but it also said it has improved its integrated graphics to the point where you can play graphically-intensive games at 1080p with a good frame rate. Intel will have more to say about its ray tracing GPU at a later date, as well as what laptop OEMs will have Tiger Lake chips, but it had plenty to share about its Xe graphics architecture and some of the neat, new features it brings with it.

At the event, Intel mostly focused on the capabilities of its Xe LP graphics microarchitecture, which will cover the spread from integrated and entry-level graphics to mid-range graphics. The microarchitecture will “continue to drive the visual experience for PC, mobile, and ultra mobile form-factors,” said David Blythe, Senior Fellow and Director of Graphics Architecture at Intel. The challenge, according to Blythe, was to figure out a way to make a better GPU within the same-size die area as previous integrated GPUs, and it seems like Intel did it.

Intel’s Xe graphics will have up to 48 texels per clock (a texel is similar to an image pixel), 96 EUs (execution units), 1536 FP32 operations per clock, and up to 16MB of L3 cache, plus twice the memory bandwidth. Compared to the current UHD Graphics 630 that shares space on many Intel CPUs, it’s a decent jump in performance. The current Gen 11 has only 64 EUs, so the new architecture is 50% larger, a smaller jump than some previous rumours indicated.
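The 1536 figure in the spec list above follows directly from the EU count, assuming Intel’s usual arrangement of 8 FP32 lanes per EU with a fused multiply-add counted as two operations:

```python
# Sanity check on the Xe LP spec above: "1536" is the number of FP32
# operations per clock implied by the EU count.
eus = 96                 # execution units in the full Xe LP configuration
lanes_per_eu = 8         # FP32 ALU lanes per EU (Intel's usual arrangement)
ops_per_lane = 2         # a fused multiply-add counts as two operations

flops_per_clock = eus * lanes_per_eu * ops_per_lane
print(flops_per_clock)   # 1536, matching the quoted spec
```

The same arithmetic gives 1024 ops per clock for the 64-EU Gen 11 part, which is where the 50% uplift comes from.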

Yet despite the smaller bump in EUs than rumoured, Intel re-designed the Xe LP microarchitecture from scratch, which means its 12th generation of graphics should not only be the most powerful the company has ever built, but also the first time in several generations that it has substantially redesigned its graphics architecture.

Among many other things, Blythe said Intel’s new architecture will “enable much broader 1080p gameplay” thanks to enhanced power performance and efficiency in Xe, including “previously unachievable gameplay on some triple-A titles” in a mobile form factor with integrated graphics. In other words, gaming with integrated graphics is about to get a whole lot better.

As it stands now, playing something like Far Cry 5 at 720p resolution and low graphics settings averages 17 frames per second with UHD Graphics 630, according to Tom’s Hardware. Intel’s Core i7-1065G7 Iris Plus Graphics variants fare better, but nowhere near the 1080p/60 fps gold standard of modern gaming, averaging 27 fps for the 15W version of the CPU and 37 fps for the 25W version.

Intel showed off some 1080p gameplay footage, which included PlayerUnknown’s Battlegrounds, Grid, Mount & Blade II: Bannerlord, Doom Eternal, and Battlefield V, and while it was hard to make out the exact fps or graphics setting, the footage was consistently smooth and totally playable. Intel did show a comparison of Grid played on Gen 11 graphics at low quality and Xe graphics at high quality, and the frame rate seemed near-identical, but again, it’s hard to properly assess frame rate and graphics quality of streamed gameplay footage.

Intel also showed a side-by-side comparison of Battlefield V being played on a Gen 11 25W platform and a Gen 12 Xe LP 15W platform, and while the gameplay looked identical, a slowed-down version showed that the Xe LP microarchitecture was better at smoothing out the rolling action of a tank’s caterpillar track while it was in motion. The Gen 11 version, in contrast, was somewhat choppy.

On the software side of things, Xe graphics will also include a new adaptive GPU optimisation, which decides when and how to recompile the shading in any given scene. According to Intel, this is a general driver change, so not only is it a part of the Xe LP architecture, but Intel also plans to enable it for Gen 9 and Gen 11—although Intel did not say when exactly it would be available for Gen 9 and Gen 11 graphics.

Intel Xe also has a new feature called Instant Game Tuning, which is expected to keep games optimised without a full driver install. If you use discrete graphics, you’re familiar with how often you need to update drivers to take advantage of all the features in a game. I feel like I’ve downloaded and installed a ton for my Nvidia graphics card in the last few weeks alone. The driver is needed to get the best performance out of your games, and is often targeted at a specific game. Intel says it’s eliminating that by delivering driver updates through its cloud, so users never have to worry about updating drivers themselves again.

To quickly cover the media side of things: Intel increased encode/decode throughput by up to 2x, added AV1 decode support, HEVC screen content coding support, 4K/8K60 playback, and HDR/Dolby Vision playback, and increased fidelity with a full 12-bit end-to-end video pipeline. The Xe LP display engine also supports DisplayPort 1.4, HDMI 2.0, USB4 Type-C, and up to a 360Hz refresh rate with Adaptive Sync.

All this means it seems something good will come out of Intel’s 10nm process being delayed for so long. CPU performance aside, Intel’s integrated graphics looks like it will get a major boost thanks to the Xe architecture. One major question left now is how Xe LP will compare to AMD’s integrated graphics, which has traditionally been superior to Intel’s. Apple is also getting ready to launch its own ARM-based processors with integrated graphics, so that will be another interesting point of comparison. Also, how will Intel’s ray tracing GPU compare to Nvidia’s, or even AMD’s?

So far we know that the custom GPUs powering the Xbox Series X and PlayStation 5 have ray tracing, but we haven’t heard much on the PC front. We know AMD’s next-gen GPUs will have ray tracing, but there’s no word on a release date yet. Nvidia’s second generation of ray tracing GPUs might be around the corner as well. Still, this is a welcome shake-up to a GPU space long dominated by Nvidia, and potentially a massive change for the budget laptop space.

CHROMEBOOK USERS CAN’T SIGN IN, HERE’S THE FIX

Chrome OS 84 was a relatively substantial update for Chromebook users and until now, we haven’t seen any major issues or bugs in the software. Unfortunately, a rather nasty glitch has recently popped up and it is preventing users from logging into their Chromebooks. While most of us have misspelled our password and had Chrome OS tell us it is invalid, this new bug is throwing the error even when the password is correct or updated.

When I open my chromebook I try to log in but it tells me the password is invalid or unrecognised, even though the same p/w allows me to open my gmail okay when I go on as a ‘guest’. I’ve tried changing the password once inside my gmail but the chromebook still doesn’t accept it.

Chromebook Support Forum

The forum thread now has dozens of “me too” comments, but it appears that Google’s developers have found a workaround to get things back up and running while a more permanent fix is in the works. Keep in mind, these “fixes” do require powerwashing your Chromebook, which will result in the loss of any local data you may have saved. If you’re like me, that’s a non-issue. I keep my stuff in the cloud, but I understand that the average user doesn’t switch or reset devices all the time like we do. That said, there are two methods to overcome this issue, and both of them require a powerwash. If you’re content deleting whatever local data you have, you can go with method number one, which is powerwashing from the lock screen. Here’s the official walkthrough from the Community Manager.

Powerwash

Powerwashing your device erases all the information on your Chromebook’s hard drive, including all the files in the Downloads folder. For more information, you can visit the Reset your Chromebook to factory settings help center article. If you have all your files backed up to Google Drive or an external storage device, please follow the below steps:

Press and hold Ctrl + Alt + Shift + r.

Select Restart.

In the box that appears, select Powerwash > Continue.

Follow the steps that appear and sign in with your Google Account. Note: The account you sign in with after you reset your Chromebook will be the owner account.

2-day Wake

Save your files by first keeping your Chromebook awake for 48 hours. If you have files in your Downloads folder, files that are not backed up to Google Drive or an external storage device, please follow the below steps: 

Ensure your charger or adapter cables are completely plugged in, both to your Chromebook and the wall. Sign in as a guest on your device that is experiencing this issue. Update your Settings:

On the left, select Device.

Select Power.

Under “When idle,” next to “While charging” and “While on battery,” choose Keep display on.

Under “Sleep when cover is closed,” toggle to off 

Keep your device awake with the lid open for 48 hours without powering off.

Once 48 hours have passed, please exit guest mode and log in to your account. Once you log in, ensure you have backed up all of your files: 

Press Ctrl + s.

At the bottom, enter a name for your file.

In the left column, choose where you’d like to save your file, such as Google Drive.

Select Save.

Factory reset your Chromebook and ensure your Chromebook’s operating system has been updated to the latest version.


This fix feels a bit hacky for sure, but my guess is that this is the quickest, least-intrusive way for users to get back on track while developers get the final fix pushed out. Seeing that step two points out that you should ensure your Chromebook is updated tells me that the fix may already be in place and you just have to reset to implement it. It is a bit inconvenient to leave your Chromebook awake for 48 hours, but it’s better than the alternative of losing important files. If you follow these procedures and are still having issues, the developers can use your feedback. To send feedback and share system info, press Alt+Shift+i, or press and hold your power button and click feedback. In the description, include #M84Login so the report gets filtered quickly to the proper developers. Chrome OS devs take login issues very seriously, and this particular bug was given the highest level of priority for that very reason.

This cheap AMD Ryzen mini PC has two unique features that all vendors should copy

Beelink is a known quantity to our readers; we’ve reviewed quite a few of its devices over the past few years – including the T4, M1, L55 and A1 – all of which show this up-and-coming challenger is not afraid to innovate.

The Beelink GTR Pro, a mini PC built around the AMD Ryzen 5 3550H CPU, may well be its most ambitious piece of kit to date.

There are a few very good reasons Beelink chose this particular processor. It has a high base frequency (2.1GHz), a relatively low TDP (35W), and a Radeon Vega 8 GPU (eight compute units) with FreeSync support.

Despite its relatively small volume (168 x 120 x 47mm) and entry-level price tag (less than $350), the GTR Pro packs two exciting features that all PC manufacturers should copy.

First, there’s a standalone fingerprint reader – an excellent feature for anyone after a business PC. Second, Beelink engineers have managed to embed Samsung’s Dex functionality, which allows a compatible smartphone to connect to the display and be controlled by the input peripherals. It’s essentially an embedded KVM and could potentially be used to control other computers too (although we haven’t yet checked).

The rest of the spec list shows that the Beelink GTR Pro really is…the bee’s knees. It supports Wi-Fi 6 and has two GbE connectors, two 4K-capable HDMI ports, one DisplayPort, one USB Type-C with DP Alt Mode connectivity, two M.2 SSD slots, a pair of microphones, and two DDR4 memory slots. All in all, that’s a total of almost 20 ports.

There’s even a CLR CMOS button that allows you to revert to default settings, just because.

Bear in mind

While global delivery is available, if the Beelink GTR Pro is shipped from the Chinese warehouse, it may take a month (and potentially longer) to arrive in either the UK or US. You may also be charged import tax, either directly or through the courier.

If you’ve managed to get hold of a cheaper product with equivalent specifications, in stock and brand new, let us know and we’ll tip our hat to you.

Intel’s First High-End Xe-HPG GPU Powered Discrete Gaming Graphics Cards Launching in 2021, Will Feature Hardware-Accelerated Ray-Tracing & Lots of Cores

Intel is all set to launch its first discrete gaming graphics cards based on the Xe GPU architecture in 2021. Aimed at the enthusiast gaming market, the next-generation Intel Xe discrete graphics cards will feature all the goodies that one can expect within a modern gaming-oriented GPU such as ray tracing and tons of horsepower for 4K enthusiast tier performance.

Intel’s Xe-HPG GPU Powered Discrete Graphics Cards For Enthusiast Gaming Launching in 2021 – HW Accelerated Ray-Tracing, GDDR6 and Massive Amounts of Cores on 10nm Process

The bomb was dropped by none other than VideoCardz, which reports that Intel is planning to launch its first discrete graphics card lineup specifically for the enthusiast gaming segment. It looks like Intel will be directly competing against AMD’s RDNA 2 and NVIDIA’s Ampere lineups, which are launching this year.

According to the leak, Intel’s gaming graphics card lineup will be powered by the Xe-HPG GPU. This specific GPU is another category within the Xe micro-architecture family. It falls between the Xe-LP and Xe-HP and is targeted primarily at the gaming audiences. The Xe-HPG GPU is expected to use a single tile and assuming that a single tile consists of 512 EUs, we are looking at up to 4096 cores on the flagship chip.

Based on our exclusive report and on what Intel has said about its Ponte Vecchio chip, it looks like Intel is all aboard the MCM train, with each chip consisting of several Xe GPU tiles interconnected to form a monster of a GPU. Here are the actual EU counts of Intel’s various MCM-based Xe HP GPUs, along with estimated core counts and TFLOPs:

Xe HP (12.5) 1-Tile GPU: 512 EUs [Est: 4096 Cores, 10.6 TFLOPs @ 1.3 GHz, 150W]

Xe HP (12.5) 2-Tile GPU: 1024 EUs [Est: 8192 Cores, 21.2 TFLOPs @ 1.3 GHz, 300W]

Xe HP (12.5) 4-Tile GPU: 2048 EUs [Est: 16,384 Cores, 42.3 TFLOPs @ 1.3 GHz, 400W/500W]
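The estimated figures above can be reproduced with back-of-the-envelope arithmetic, assuming 8 FP32 lanes per EU and a fused multiply-add counted as two operations per clock (the 4-tile result lands at ~42.6 TFLOPs, slightly above the leak’s 42.3, so the leak may assume a marginally lower clock):

```python
# Back-of-the-envelope FP32 throughput for the Xe HP tile estimates above.
# Assumes 8 FP32 ALU lanes per EU and FMA counted as two ops per clock.
def xe_tflops(eus: int, ghz: float) -> float:
    cores = eus * 8                 # "cores" here means FP32 ALU lanes
    return cores * 2 * ghz / 1000   # FMA = 2 ops; result in TFLOPs

for tiles, eus in [(1, 512), (2, 1024), (4, 2048)]:
    print(f"{tiles}-tile: {eus} EUs -> {eus * 8} cores, "
          f"{xe_tflops(eus, 1.3):.1f} TFLOPs @ 1.3 GHz")
```

This also shows why the estimated core counts scale linearly with tile count: each tile simply contributes another 512 EUs.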

Of course, Intel can opt for a higher EU count on its Xe-HPG gaming-specific GPUs, but that remains to be seen once the final specifications are disclosed. As for gaming-specific features, Intel Xe-HPG powered graphics cards will feature hardware-accelerated ray tracing and GDDR6 memory to optimize performance and value, whereas the Xe-HP lineup aimed at the data center market will opt for HBM memory.

The leak also mentions an Intel Xe-LP product known as SG1, a graphics card aimed at the server market. Few details are known about this part, but it is expected by the end of 2020. Raja Koduri has already teased three massive GPU dies in ‘BFP’ (Big Fabulous Package) flavors, which include all three Xe-HP variants. There’s little doubt that one of these chips shares design principles with the Xe-HPG, which, like the most entry-level Xe-HP part, is based on the 1-tile design.

All Intel Xe GPUs are expected to be fabricated on a 10nm process node, but by an external foundry, according to the leaked information. A likely candidate would be Samsung, as its 10nm process has been in mass production for a while now.

Intel recently gave us the first demo of its Xe LP GPU inside its upcoming Tiger Lake CPUs which is proving to be a major leap in integrated graphics performance for Intel as seen in the demonstration. Expect more information on the Xe HP & Xe HPC GPUs in the coming months.

Apple releases iOS 13.6.1 and iPadOS 13.6.1 for all users

Apple has just released iOS 13.6.1 and iPadOS 13.6.1 for all users, just one month after the release of iOS 13.6. The update fixes some known issues related to the Exposure Notifications and another one that could make the displays exhibit a green tint.

iOS 13.6 was released in July with Car Key, a digital key that will work for locking, unlocking, and starting supported vehicles. The update also introduced Apple News+ Audio with professionally narrated audio stories curated from the Apple News team to your device. And now Apple is rolling out iOS 13.6.1 and iPadOS 13.6.1 for all users.

One noteworthy aspect of today’s update is that it fixes a problem that could cause a notable green tint on the screen of some iPhone models, as reported here in June. According to Apple, this problem was caused by a thermal management failure and is now fixed with iOS 13.6.1.

iOS 13.6.1 and iPadOS 13.6.1 are now available for all Apple devices that are compatible with iOS 13 and iPadOS 13. Users can install the latest update on their iPhone, iPod touch, or iPad through the Software Update menu in the Settings app. Make sure you are connected to a Wi-Fi network and have at least 50% battery.

Check out the full release notes of iOS 13.6.1 below:

iOS 13.6.1 includes bug fixes for your iPhone.

Addresses an issue where unneeded system data files might not be automatically deleted when available storage is low

Fixes a thermal management issue that caused some displays to exhibit a green tint

Fixes an issue where Exposure Notifications could be disabled for some users

AOC’s new CU34G2X: 34-inch UltraWide 3440×1440 at 144Hz for $450

AOC has just revealed its new CU34G2X gaming monitor, offering a 34-inch 21:9 aspect ratio UltraWide experience with a native 3440 x 1440 resolution.

The 34-inch UltraWide has a super-fast 144Hz refresh rate and 1ms response time, so you should be pretty good for even the fastest shooters on the market. Call of Duty: Modern Warfare and its battle royale mode Warzone would be pretty damn near perfect on the AOC CU34G2X.

AOC taps a VA panel with a 1500R curvature for its CU34G2X monitor. AMD FreeSync is supported, so you might want to grab a Radeon RX 5700 XT or wait for the new RDNA 2-based cards that are coming in the next few months.

Xiaomi’s new transparent OLED is one of the weirdest TVs ever

Xiaomi has its fingers thoroughly in a number of tech pies, and for its tenth anniversary, the Chinese tech giant announced a swathe of new products, including the Mi 10 Ultra flagship smartphone and… a completely transparent TV.

Despite having a pretty considerable novelty factor, the Xiaomi Mi TV LUX OLED Transparent Edition is actually going to be available to consumers in China from August 16, where it’ll cost a cool ¥49,999 (roughly $7,200 / £5,500 / AU$10,000).

Xiaomi’s transparent OLED joins the novelty TV ranks of Samsung’s The Frame…

…the rotating, vertical Samsung Sero TV…

…and of course, Samsung’s mammoth, yet paper-thin, The Wall

As its name (and transcendental product video) suggests, you can see through the TV both when it’s turned off and when it’s displaying content, with Xiaomi claiming that the “images seem to be suspended in the air”.

Outside of its star feature, the Transparent Edition of the Mi TV LUX sports a 55-inch OLED panel, 120Hz refresh rate, 93% DCI-P3 color support, and “150000:1 static contrast ratio and an infinite dynamic contrast ratio”.

Of course, all of these specs are only as impressive as the TV’s environment allows for – the furniture, walls and other objects you place behind it will no doubt have an impact on its clarity, as will the room’s lighting situation.

While there’s no word yet on global availability (and there’s every chance that it won’t leave the Asian market), the fact that a TV with transparent capabilities is at a point where it can be mass-produced is a big step forward for the completely necessary technology.

Huawei’s 24-Core 7nm Kunpeng CPU Allegedly Beats Core i9-9900K In Multi-Core Performance

Chinese news outlet IThome received word that Huawei is on the brink of launching the brand’s new desktop PC (internally known as Pangu) for the domestic market. The system utilizes a variant of the company’s Kunpeng 920, which is also known as the Hi1620. The report claims that the Kunpeng 920 3211K’s multi-core performance is slightly better than that of the Intel Core i9-9900K Coffee Lake processor.

The Kunpeng 920, which is based on Arm’s Neoverse N1 (codename Ares) microarchitecture, boasts core configurations that span from 24 up to 64 cores, running between 2.4 GHz and 3 GHz. TSMC used to produce the Kunpeng 920 for Huawei on its 7nm process node before cutting off all ties with the Chinese tech giant due to new U.S. regulations.

The Kunpeng 920 3211K in particular has 24 cores that max out at 2.6 GHz. Huawei pairs the processor with 8GB of SO-DIMM memory, a 512GB Samsung SSD and AMD’s Radeon 520 mobile graphics card.

Huawei tailors the Pangu to government and enterprise markets, meaning the system is equipped with China’s homemade Unified Operating System (UOS). User expansion and customization on the Pangu is close to zero. The Kunpeng 920 3211K is soldered to the motherboard and doesn’t support other graphics cards. The UOS is cemented into the PC so you can’t install Windows on it either. We suspect you may be able to upgrade the memory or SSD, but that’s about it.

The purported images of the Pangu show three USB Type-A ports, one USB Type-C port and a single 3.5mm headphone jack in the front of the case. There is also room for an optical drive. The rear of the case holds four USB Type-A ports, one Ethernet port, three 3.5mm audio jacks and a D-Sub port. IThome’s report states that the Pangu comes with a 23.8-inch monitor with a resolution of 1920 x 1080 and 70% NTSC color gamut.

Pricing and the exact release date for the Pangu are unknown. The IThome reader only hinted that the Pangu will launch soon.

Chrome for Windows to gain handy Incognito Mode desktop shortcut

The Windows version of Google Chrome is preparing to add a desktop shortcut to open directly into an Incognito window, perfect for whatever you use Incognito Mode for.

Incognito Mode in Google Chrome has more than its fair share of uses, be it hiding certain web pages from your history, bypassing paywalls from larger news sites, or just wanting to do a search without it appearing in your Google Search history. Whatever you use Incognito Mode for, the current best way to open an Incognito window is to first open Chrome, then use the Ctrl-Shift-N shortcut or the “New Incognito window” menu option.

Now it seems Google is preparing a newer, handier way to open Chrome’s Incognito Mode, exclusive to Windows, by allowing you to create a desktop shortcut that opens directly to Incognito. This comes from a pair of Chromium code changes [1, 2], one of which adds a new flag to chrome://flags.

Enable Incognito Desktop Shortcut

Enables users to create a desktop shortcut for incognito mode.

#enable-incognito-shortcut-on-desktop

Digging a bit deeper, we find that this flag will add a new “Create Shortcut” option to Incognito Mode’s profile menu, just as Chrome for Windows already allows each profile to create a desktop shortcut. For now, we’re not sure why this feature is exclusive to Windows, instead of also arriving on Mac and Linux.
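For context, Chrome has long supported an `--incognito` command-line switch, so Windows users can already build such a shortcut by hand today; the new flag simply automates this from the profile menu. A manual sketch (the path below is the default Windows install location, so adjust it for your system):

```shell
# Manual equivalent today: point a Windows shortcut's "Target" field (or a
# command line) at chrome.exe with the --incognito switch to launch
# straight into an Incognito window.
"C:\Program Files\Google\Chrome\Application\chrome.exe" --incognito

# The in-development flag that adds the built-in shortcut option:
#   chrome://flags/#enable-incognito-shortcut-on-desktop
```

The built-in option should be friendlier, since it creates the shortcut with the right icon and profile wiring rather than relying on users editing shortcut properties.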

As this feature is still early in development and Chrome 86 is less than a month from reaching Beta, we expect that Chrome’s new Incognito desktop shortcut will probably not arrive until Chrome 87, due to release sometime in November.