Tech News Feed
Save Up to 47% on Brita Water Pitchers, Filters and Bottles - CNET
Overwatch 2 players say that frame rate drops are making the game 'unplayable' on PS5
Overwatch 2's eighth season went live on Tuesday and things aren't exactly going smoothly for everyone. Some are complaining about performance issues, particularly concerning frame rates on PlayStation 5. On Blizzard's own forums and Reddit, players are suggesting that even the menus are lagging on the console.
"I play on PS5 with 120 Hz monitor and settings for that output, but randomly either in [fights] or walking back from spawn, even on menus, I am dropping down to what seems like single digit to low double digit frames per second," a player who goes by Sartell wrote. Others claim that Overwatch 2 is "unplayable" on PS5 at the minute, with some claiming that frame rates are dropping to below 20 fps. The problem doesn't seem to be as prevalent on other platforms.
I encountered the same issues in a brief test on PS5. It took a few seconds for my character to complete a full rotation, which is practically a kiss of death in such a fast-paced shooter. It was almost like playing GoldenEye 007 at 12 fps all over again.
In the current list of known issues, which was last updated on Tuesday, Blizzard notes that "We are investigating reports of performance issues for some platforms." Engadget has asked Blizzard when a fix might be ready.
The performance issues are a pity in general, but even more so given that new tank character Mauga is a blast to play. As such, PS5 players may need to wait for a hotfix before they can properly check out the latest hero, unless they're content with enjoying the action as though it were a colorful slideshow. Otherwise, downloading the PS4 version of the game could work in a pinch.
Google's Pixel Phones Are Getting a Bunch of New Features - CNET
Remote Collaboration Fuses Fewer Breakthrough Ideas
Save 47% on Fossil Gen 6 Wellness Edition Smart Watch: Now Just $159 - CNET
Don't Miss This Extended Holiday Deal of Up to 20% Off Le Creuset - CNET
Bitwig Studio update brings tons of new sound design options
The digital audio workstation (DAW) Bitwig Studio just received a substantial update that brings plenty of new sounds and effects. Version 5.1 boasts a spate of enhancements, including new waveshapers, new filters, polyphonic voice stacking, a dual oscillator and more. This is especially good news for avid sound designers, as the filters and waveshapers should allow for plenty of tinkering to find that perfect tone.
The filters are all fairly distinctive, each going a step beyond a simple lowpass. For instance, the Fizz filter offers two separate cutoffs with embedded feedback. The Rasp filter is bright and resonant with a host of adjustment options. Vowels is a morphing formant filter with an array of models, pitch and frequency offsets that can be programmed to change over time. Finally, there’s Ripple, which is described as a “hyper-resonant circuit.”
There are six new waveshapers to choose from, including the Push soft clipper and Heat S-shaped clipper. Both of these could be great for adding a bit of sizzle to dry tracks. Soar is a soft wavefolder that “makes the quietest parts loud” and Howl does something similar, but with a focus on creating harsh, glitchy sounds. Shred helps get rid of unwanted artifacts and Diode is a classic circuit, which Bitwig calls “a warm, familiar option.”
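The transfer curves behind those names are easier to picture in code. Here's a toy numpy sketch of a generic soft clipper and a generic wavefolder; these are illustrative shapes only, not Bitwig's actual DSP.

```python
import numpy as np

def soft_clip(x, drive=2.0):
    # Generic tanh soft clipper: rounds off peaks smoothly instead of
    # flattening them, the basic idea behind a Push-style device.
    return np.tanh(drive * x)

def wavefold(x, drive=2.0):
    # Generic triangle wavefolder: peaks beyond [-1, 1] are reflected
    # back into range, which is why folding "makes the quietest parts
    # loud" relative to the flattened peaks.
    y = drive * x
    return np.abs((y - 1) % 4 - 2) - 1

# Shape a test sine wave with both curves.
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 110 * t)
clipped = soft_clip(sine)
folded = wavefold(sine, drive=3.0)
```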
All filters and waveshapers can be used within the DAW’s Filter+ and Sweep devices, though they are also available as standalone Grid modules. That’s the magic of Bitwig Studio and what sets it apart from other DAWs. Everything is modular, with mix-and-match options for every effect, filter, oscillator and waveshaper.
As for other tools, there’s a new Voice Stacking module that offers layered playback of up to 16 voices per note and a dual oscillator called Bite. Bitwig has also added experimental elements to the quantizing function, which should make for some wild remixes, and adjusted the UI so the mixer can be dragged and dropped anywhere. These changes follow Bitwig Studio 5.0, which offered many new audio playback tools.
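As a rough illustration of what voice stacking means, here's a short numpy sketch that layers several slightly detuned copies of a sawtooth oscillator for each note. It's a generic unison stack under our own assumptions, not Bitwig's module.

```python
import numpy as np

def stacked_note(freq, voices=8, detune_cents=12.0, sr=48000, dur=1.0):
    # Layer `voices` detuned copies of a naive sawtooth for one note,
    # spread evenly across +/- detune_cents.
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for cents in np.linspace(-detune_cents, detune_cents, voices):
        f = freq * 2 ** (cents / 1200)   # cents -> frequency ratio
        out += 2 * ((f * t) % 1.0) - 1   # naive sawtooth in [-1, 1)
    return out / voices                  # normalize the stack

note = stacked_note(110.0, voices=16)    # 16 voices per note, as in 5.1
```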
Bitwig Studio 5.1 is out now, and it's a free upgrade for license holders with an active plan. The company constantly adds new features to the DAW, as recent-ish updates saw tools to mangle MIDI performances and the addition of a hybrid modular synth.
The DAW is also on sale at the moment. You can get Bitwig Studio for $299, down from its usual $399 price. The bare-bones Essential version of the software, meanwhile, is $79 at the moment instead of $99.
AMD's Ryzen 8040 chips remind Intel it's falling behind in AI PCs
Last January, AMD beat out Intel by launching its Ryzen 7040 chips, the first x86 processors to integrate a neural processing unit (NPU) for AI workloads. Intel's long-delayed Core Ultra "Meteor Lake" chips, its first to integrate an NPU, are set to arrive on December 14th. But it seems AMD can't help but remind Intel it's lagging behind: Today, AMD is announcing the Ryzen 8040 series chips, its next batch of AI-equipped laptop hardware, and it's also giving us a peek into its future AI roadmap.
The Ryzen 8040 chips, spearheaded by the 8-core Ryzen 9 8945HS, are up to 1.4 times faster than their predecessors when it comes to Llama 2 and AI vision model performance, according to AMD. They're also reportedly up to 1.8 times faster than Intel's high-end 13900H chip when it comes to gaming, and 1.4 times faster for content creation. Of course, the real test will be comparing them to Intel's new Core Ultra chips, which weren't available for AMD to benchmark.
AMD's NPU will be available on all of the Ryzen 8040 chips except for the two low-end models, the six-core Ryzen 5 8540U and the quad-core Ryzen 3 8440U. The company says the Ryzen 7040's NPU, AMD XDNA, is capable of reaching 10 TOPS (tera operations per second), while the 8040's NPU can hit 16 TOPS. Looking further into 2024, AMD also teased its next NPU architecture, codenamed "Strix Point," which will offer "more than 3x generative AI NPU performance." Basically, don't expect AMD to slow down its AI ambitions anytime soon.
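For scale, those generational claims translate into simple arithmetic, assuming the "more than 3x" figure is measured against the 8040's NPU, which AMD doesn't spell out:

```python
ryzen_7040_npu_tops = 10
ryzen_8040_npu_tops = 16

# Generational uplift of the XDNA NPU from 7040 to 8040.
print(ryzen_8040_npu_tops / ryzen_7040_npu_tops)  # 1.6x

# "More than 3x generative AI NPU performance" for Strix Point; the
# baseline is our assumption (we read it as the 8040's 16 TOPS).
print(3 * ryzen_8040_npu_tops)  # implies north of 48 TOPS
```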
It's worth remembering that both AMD and Intel are lagging behind Qualcomm when it comes to bringing NPUs to Windows PCs. Qualcomm's SQ3 powered the ill-fated Surface Pro 9 5G. That was just a minor win for the Snapdragon maker, though: the Windows-on-Arm experience is still a mess, especially when it comes to running older apps that require x86 emulation.
The far more compelling competitor for Intel and AMD is Apple, which has been integrating Neural Engines in its hardware since the A11 Bionic debuted in 2017, and has made them a core component in the Apple Silicon chips for Macs. Apple's Neural Engine speeds up AI tasks, just like AMD and Intel's NPUs, and it helps tackle things like Face ID and photo processing. On PCs, NPUs enable features like Windows 11's Studio Effects in video chats, which can blur your background or help maintain eye contact.
Just like Intel, AMD is also pushing developers to build NPU features into their apps. Today, it's also unveiling the Ryzen AI Software platform, which will allow developers to take pre-trained AI models and optimize them to run on Ryzen AI hardware. AMD's platform will also help those models run on Intel's NPUs, similar to how Intel's AI development tools will ultimately help Ryzen systems. We're still in the early days of seeing how devs will take advantage of NPUs, but hopefully AMD and Intel's competitive streak will help deliver genuinely helpful AI-powered apps soon.
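AMD hasn't published details of the new platform here, but the general pattern it describes, taking a pre-trained model and handing it to a hardware-specific backend, looks something like this ONNX Runtime sketch. The provider name and model path are assumptions for illustration, not confirmed details of Ryzen AI Software.

```python
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"  # hypothetical pre-trained ONNX model

# Prefer an NPU-backed execution provider when present, fall back to CPU.
# "VitisAIExecutionProvider" is AMD's Vitis AI provider in ONNX Runtime;
# whether Ryzen AI Software uses exactly this path is an assumption.
wanted = ("VitisAIExecutionProvider", "CPUExecutionProvider")
providers = [p for p in wanted if p in ort.get_available_providers()]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # NCHW image batch
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```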
Acer's Nitro V16 gaming laptop is powered by new AMD Ryzen 8040 processors
Acer just announced a new gaming laptop, the Nitro V 16. This computer has some serious bells and whistles, the headline feature being the just-announced AMD Ryzen 8040 series processor. The chip has plenty of oomph for modern games, and its AI hardware can help enable enhanced ray-traced visuals.
You can spec out the laptop how you see fit, with GPU options up to the respectable NVIDIA GeForce RTX 4060. This GPU supports DLSS 3.5, including its AI-powered ray-tracing denoiser, Ray Reconstruction. You have your pick of two display options, either a WQXGA or a WUXGA screen. Both boast 165 Hz refresh rates and 3ms response times. Acer promises that the displays offer “fluid visuals with minimal ghosting and screen tearing.”
As for other specs, you can beef up the laptop with up to 32GB of DDR5-5600 RAM and 2TB of PCIe Gen 4 SSD storage. Acer also touts a new cooling system that features a pair of high-powered fans that make it “well-equipped to take on heavy gameplay.” To that end, you can monitor performance and temperature via the company’s proprietary NitroSense utility app.
There are three microphones outfitted with AI-enhanced noise reduction tech for online tomfoolery, and the speakers incorporate DTS:X Ultra sound optimization algorithms for immersive audio. Finally, you get a USB4 Type-C port, two USB 3 ports, an HDMI port, a microSD card reader and WiFi 6E compatibility.
If the name of the processor seems a bit confusing, that's because AMD recently changed up its naming conventions. Here's a simple breakdown. The "8" relates to 2024, and the second number refers to the product line or relevant market segment, so it doesn't mean much to consumers. The third number identifies the CPU architecture: the "4" indicates that the chip uses the current Zen 4 design. Finally, the fourth number marks the tier within that architecture. A "0" denotes the lower tier and a "5" the upper one; in the previous generation, for example, Zen 3 chips carried a "0" while Zen 3+ chips carried a "5". The sketch below decodes these digits.
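To make the digit scheme concrete, here's a small illustrative decoder based on the breakdown above. It's our own sketch, not an official AMD tool, and the year mapping is inferred from the "7 is 2023, 8 is 2024" pattern.

```python
def decode_ryzen_mobile(model: str) -> dict:
    # Decode a four-digit Ryzen mobile model number such as "8945"
    # (from "Ryzen 9 8945HS") under AMD's current naming scheme.
    d = model[:4]
    return {
        "portfolio_year": 2016 + int(d[0]),  # "7" -> 2023, "8" -> 2024
        "segment_digit": int(d[1]),          # product line / market segment
        "architecture": f"Zen {d[2]}",       # "4" -> Zen 4
        "tier": "upper" if d[3] == "5" else "lower",  # e.g. Zen 3+ vs Zen 3
    }

print(decode_ryzen_mobile("8945"))
# {'portfolio_year': 2024, 'segment_digit': 9,
#  'architecture': 'Zen 4', 'tier': 'lower'}
```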
The Windows 11 gaming laptop will be available in March, with a starting price of $1,000 for the base model. It also comes with one month of Xbox Game Pass, so you can put it through its paces.
AMD Ryzen Ups Its AI Game With Ryzen 8040 Series Mobile CPUs - CNET
Apple Readies New iPads and M3 MacBook Air To Combat Sales Slump
How to use Personal Voice on iPhone with iOS 17
Ahead of the International Day of Persons with Disabilities last Sunday, Apple released a short film that showcased its Personal Voice accessibility feature, which debuted earlier this year in iOS 17. Personal Voice allows users to create digital versions of their voice to use on calls, supported apps and Apple’s own Live Speech tool.
For those who are at risk of permanently losing their voice due to conditions like Parkinson’s disease, multiple sclerosis, ALS and vocal cord paralysis, not sounding like yourself can be yet another form of identity loss. Being able to create a copy of your voice while you’re still able might help alleviate the feeling that you’ll never feel like yourself again, or that your loved ones won’t know what you sound like.
Anyone on iOS 17, iPadOS 17 or macOS Sonoma (on Macs with Apple Silicon) can create a Personal Voice in case they need it in the future, whether temporarily or for long-term use. I found the process (on my iPhone 14 Pro) pretty straightforward and was surprisingly satisfied with the result. Here’s how you can set up your own Personal Voice.
Before you start the process, make sure you have a window of about 30 minutes. You’ll be asked to record 150 sentences, and depending on how quickly you speak, it could take some time. You should also find a quiet place with minimal background sound and get comfortable. It’s also worth having a cup of water nearby and making sure your phone has at least a 30 percent charge.
How to set up Personal Voice on iPhone
When you’re ready, go to the Personal Voice menu by opening Settings and finding Accessibility > Personal Voice (under Speech). Select Create A Personal Voice, and Apple will give you a summary of what to expect. Hit Continue, and you’ll see instructions like “Find a quiet place” and “Take your time.”
Importantly, one of the tips is to “Speak naturally.” Apple encourages users to “read aloud at a consistent volume, as if you’re having a conversation.” After you tap Continue on this page, there is one final step where your phone uses its microphone to analyze the level of background noise, before you can finally start reading prompts.
The layout for the recording process is fairly intuitive. Hit the big red record button at the bottom, and read out the words in the middle of the page. Below the record button, you can choose from “Continuous Recording” or “Stop at each phrase.”
In the latter mode, you’ll have to tap a button each time you’ve recorded a phrase, while Continuous is a more hands-free experience that relies on the phone to know when you’re done talking. For those with speech impairments or who read slowly, continuous mode could feel stressful. It happened just once for me, but the iPhone trying to skip ahead to the next phrase before I was ready was enough to make me feel I had to be quick with my reactions.
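Continuous mode hinges on end-of-utterance detection, the phone deciding that you've stopped talking. Apple hasn't published how it does this, but a toy version of the underlying idea (track short-term energy, end the phrase after enough trailing silence) looks like this numpy sketch; real detectors are far more robust than a fixed dB gate.

```python
import numpy as np

def phrase_ended(samples, sr=16000, frame_ms=30, silence_db=-40.0,
                 trailing_silence_s=0.8):
    # Toy end-of-utterance check: split the audio into short frames,
    # measure each frame's RMS level in dB, and declare the phrase done
    # once the last `trailing_silence_s` seconds all sit below the
    # threshold. A stand-in for whatever Apple actually uses.
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-10)
    need = int(trailing_silence_s * 1000 / frame_ms)
    return n_frames >= need and bool(np.all(rms_db[-need:] < silence_db))
```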
Personal Voice on iOS 17: First impressions
Still, for the most part the system was accurate at recognizing when I was done talking, and offered enough of a pause that I could tap the redo button before moving to the next sentence. The prompts mostly consisted of historical and geographical information, with the occasional expressive exclamation thrown in. There’s a fairly diverse selection of phrases, ranging from simple questions like “Can you ask them if they’re using that chair?” to forceful statements like “Come back inside right now!” or “Ouch! That is really hot!”
I found myself trying to be more exaggerated when reading those particular sentences, since I didn’t want my resulting personal voice to be too robotic. But it was exactly while doing that that I realized the problem inherent in the process. No matter how well I performed or acted, there would always be an element of artifice in the recordings. Even when I did my best to pretend like something was really hot and hurt me, it still wasn’t a genuine reaction. And there’s definitely a difference between how I sound when narrating sentences and having a chat with my friends.
That’s not a ding on Apple or Personal Voice, but simply an observation that there is a limit to how well my verbal self can be replicated. When you’re done with all 150 sentences, Apple explains that the process “may need to complete overnight.” It recommends that you charge and lock your iPhone, noting that your Personal Voice “will be generated only while iPhone is charging and locked” and that you’ll be alerted when it’s ready to use. It’s worth noting that during this time, Apple trains the text-to-speech neural networks entirely on the device, not in the cloud.
In my testing, 20 minutes after I put down my iPhone 14 Pro, the process was only 4 percent complete; after 20 more minutes, it had reached just 6 percent. This is definitely something you’ll need to allocate hours, if not a whole night, for. If you’re not ready to abandon your device for that long, you can still use your phone; just know that it will delay the process.
When your Personal Voice is ready, you’ll get a notification and can then head to settings to try it out. On the same page where you started the creation process, you’ll see options to share your voice across devices, as well as to allow apps to request to use it. The former stores a copy of your voice in iCloud for use on your other devices. Your data will be end-to-end encrypted in transit, and the recordings you made will only be stored on the phone you used to create the voice, but you can export your clips in case you want to keep a copy elsewhere.
How to listen to and use Personal Voice
You can name your Personal Voice and create another if you prefer (you can generate up to three). To listen to the voice you’ve created, go back to the Speech part of the accessibility settings, and select Live Speech. Turn it on, choose your new creation under Voices and triple-click your power button. Type something into the box and hit Send. You can decide if you like what you hear and whether you need to make a new Personal Voice.
At first, I didn’t think mine sounded expressive enough, when I tried things like “How is the weather today?” But after a few days, I started entering phrases like “Terrence is a monster” and it definitely felt a little more like me. Still robotic, but it felt like there was just enough Cherlynn in the voice that my manager would know it was me calling him names.
With concerns around deepfakes and AI-generated content at an all-time high this year, perhaps a bit of artifice in a computer-generated voice isn’t such a bad thing. I certainly wouldn’t want someone to grab my phone and record my digital voice saying things I would never utter in real life. Finding a way to give people a sense of self and improve accessibility while working with all the limits and caveats that currently exist around identity and technology is a delicate balance, and one that I’m heartened to see Apple at least attempt with Personal Voice.
Apple’s latest tvOS beta kills the iTunes Movies and TV shows apps
Apple’s latest tvOS beta suggests the iTunes Movies and TV Shows apps on Apple TV are on their way out. 9to5Mac reports the set-top box’s former home of streaming purchases and rentals is no longer in the tvOS 17.2 release candidate (RC), now available to developers. (Unless Apple finds unexpected bugs, RC firmware usually ends up identical to the public version.) Apple’s folding of the iTunes apps into the TV app was first reported in October.
9to5Mac says the home screen icons for iTunes Movies and iTunes TV Shows are still present in the tvOS 17.2 firmware, but they point to the TV app, where the old functionality will live. The publication posted a photo of a redirect screen, which reads, “iTunes Movies and Your Purchases Have Moved. You can buy or rent movies and find your purchases in the Apple TV App.” Below it are options to “Go to the Store” or “Go to Your Purchases.”
The change doesn’t remove any core functionality since the TV app replicates the iTunes Movies and TV Shows apps’ ability to buy, rent and manage purchases. The move is likely about streamlining — shedding the last remnants of the aging iTunes brand — while perhaps nudging more users into Apple TV+ subscriptions.
The update also adds a few features to the TV app on Apple’s set-top box. These include the ability to filter by genre in the purchased section, the availability of box sets in store listings and a new sidebar design for easier navigation.
Apple has increasingly invested in video content as it relies more on its services division for growth. Martin Scorsese’s Killers of the Flower Moon debuted in theaters in October, earning critical acclaim and awards-season buzz for the months ahead. (Apple already became the first streamer to win a Best Picture Oscar, with CODA in 2022.) Scorsese’s film is currently available to rent or buy in the TV app, and it’s scheduled to land on Apple TV+ “at a later date.” Apple’s high-profile original series include Ted Lasso, Severance, The Morning Show, Foundation and Silo, among others.
Millions of Coders Are Now Using AI Assistants. How Will That Change Software?
Researchers develop under-the-skin implant to treat Type 1 diabetes
Scientists have developed a new implantable device that has the potential to change the way Type 1 diabetics receive insulin. The thread-like implant, or SHEATH (Subcutaneous Host-Enabled Alginate THread), is installed in a two-step process that ultimately leads to the deployment of “islet devices,” which are derived from the cells that produce insulin in our bodies naturally.
First, the scientists figured out a way to insert nylon catheters under the skin, where they remain for up to six weeks. After insertion, blood vessels form around the catheters; these vessels structurally support the islet devices that are placed in the space once the catheter is removed. The newly implanted 10-centimeter-long islet devices secrete insulin via the islet cells that form around them, while also receiving nutrients and oxygen from blood vessels to stay alive.
The implantation technique was designed and tested by researchers at Cornell and the University of Alberta. In 2017, Cornell’s Minglin Ma, a professor of biological and environmental engineering, created the first such implantable polymer device, dubbed TRAFFIC (Thread-Reinforced Alginate Fiber For Islets enCapsulation), which was designed to sit in a patient’s abdomen. In 2021, Ma’s team developed an even more robust implantable device that proved it could control blood sugar levels in mice for six months at a time.
The current problem with SHEATH is its long-term application in patients. “It’s very difficult to keep these islets functional for a long time inside of the body… because the device blocks the blood vessels, but the native islet cells in the body are known to be in direct contact with vessels that provide nutrients and oxygen,” Ma said. Because the islet devices eventually need to be removed, the researchers are still working on ways to maximize the exchange of nutrients and oxygen in large-animal models — and eventually patients. But the implant could one day replace the current standard treatment for Type 1 diabetes, which requires either daily injections or insulin pumps.
Best Credit Cards With No Foreign Transaction Fees for December 2023 - CNET
Meta's AI image generator is available as a standalone website
Meta has launched a standalone version of its image generator as it tests dozens of new generative AI features across Facebook, Instagram and WhatsApp. The image generator, called Imagine, was first previewed at the company’s Connect event in November and has been available as part of Meta’s AI chatbot.
Now, with its own dedicated website at imagine.meta.com, the tool will be available outside of the company’s messaging apps. Like other generative AI tools, Imagine allows users to create images from simple text prompts. Imagine, which relies on Meta’s Emu model, will generate four images for each prompt.
The images all have a visible watermark in the lower left corner indicating they were created with Meta AI. Additionally, Meta says it will soon begin testing an invisible watermarking system that’s “resilient to common image manipulations like cropping, color change (brightness, contrast, etc.), screen shots and more.” For those interacting with the image generator in Meta’s messaging apps, the company also introduced a new “reimagine” tool, which allows users to tweak existing images created with Meta AI in chats with friends.
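On the watermark-resilience claim, "resilient to common image manipulations" has a testable meaning: the detector should still recover the mark after crops, brightness shifts and re-screenshotting. Here's a hedged Pillow sketch of such a robustness check; decode_watermark is a hypothetical stand-in, since Meta hasn't published its detector.

```python
from PIL import Image, ImageEnhance

def decode_watermark(img):
    # Hypothetical detector: a stand-in for Meta's unpublished decoder.
    raise NotImplementedError

def robustness_suite(img: Image.Image) -> dict:
    # Apply the manipulations Meta says the watermark should survive,
    # then check whether the (hypothetical) decoder still finds the mark.
    w, h = img.size
    variants = {
        "original": img,
        "cropped": img.crop((w // 10, h // 10, w - w // 10, h - h // 10)),
        "brighter": ImageEnhance.Brightness(img).enhance(1.4),
        "higher_contrast": ImageEnhance.Contrast(img).enhance(1.4),
        "screenshot_like": img.resize((w // 2, h // 2)).resize((w, h)),
    }
    return {name: decode_watermark(v) for name, v in variants.items()}
```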
Interestingly, the standalone site for Imagine requires not just a Facebook or Instagram login, but a Meta account, which was introduced earlier this year so VR users could use Quest headsets without a Facebook login. It’s unclear for now if Meta is planning an eventual virtual reality tie-in for Imagine, but the company has recently used its new generative AI tools to try to breathe new life into its metaverse.
Meta is also testing dozens of new generative AI features across its apps. On Instagram, the company is testing the ability to convert a landscape image to portrait in Stories with a new creative tool called “Expander.” On Facebook, generative AI will also start to show up in places like Groups and Marketplace. Meta is also testing AI-generated writing suggestions for Feed posts and Facebook Dating profiles, as well as AI-generated replies that creators can use in Instagram direct messages.
With the latest changes, Meta is also making its 28 celebrity-infused chatbots available to all users in the United States. The company says it will test a new “long-term memory” feature for some of its AI characters so that users can more easily return to previous chats and pick up the conversation where they left off. The chatbots are available in Instagram, Messenger and WhatsApp.
The updates highlight how Meta has sought to make generative AI a core part of its service as it tries to compete with the offerings of other AI companies. Mark Zuckerberg said earlier this year that the company would bring gen AI into “every single one of our products.”
Ubisoft's Rocksmith+ guitar-learning app now teaches piano
Ubisoft’s Rocksmith+ guitar-learning platform just got an update that’s sure to please ivory ticklers, as the app now teaches piano. A single subscription allows access to every instrument under Rocksmith’s umbrella, including acoustic guitar, electric guitar, electric bass and, now, piano.
The newly updated Rocksmith+ already boasts 400 piano arrangements to practice, with at least 40 more arriving each month. These songs include pop hits like Elton John’s “Rocket Man” and Adele’s “Make You Feel My Love,” along with titles culled from a diverse array of genres, from classical to soundtracks and beyond. These piano-based compositions join over 7,000 pre-existing songs for guitar and bass players.
The app’s available for both mobile devices and PCs via the Ubisoft store, and the update lets you use a digital piano, keyboard or wired MIDI controller. It supports keybeds from 25 keys up to the full complement of 88. You’ll have your choice of practice methods, as the app offers an interactive 3D interface or traditional sheet music. Also, you don’t need any extra gear, like a dedicated microphone, to get going.
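Under the hood, supporting a wired MIDI controller means reading note messages from the device. Here's a minimal sketch of that with the cross-platform mido library; this is our illustration, assuming a backend like python-rtmidi is installed and a keyboard is plugged in, not Ubisoft's code.

```python
import mido

# Open the first connected MIDI input device.
port_name = mido.get_input_names()[0]

with mido.open_input(port_name) as inport:
    for msg in inport:
        # Key presses arrive as note_on messages; by convention,
        # a velocity of 0 means the key was released.
        if msg.type == "note_on" and msg.velocity > 0:
            octave, pitch = divmod(msg.note, 12)
            name = ["C", "C#", "D", "D#", "E", "F",
                    "F#", "G", "G#", "A", "A#", "B"][pitch]
            print(f"{name}{octave - 1} (MIDI note {msg.note})")  # 60 -> C4
```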
Reviews for the guitar and bass elements of Rocksmith+ have been mixed, with some publications praising the intuitive interface and others decrying the limited song selection. The app offers a free trial for a week, but subscriptions cost $15 per month or $100 per year. The free trial is only available with the yearly subscription, so exercise caution when signing up and be sure to set a reminder to cancel before the week is up if you aren’t jibing with the software.
Legal Manga App User Banned After Taking 'Fraudulent Screenshots'
Honda will reveal a new EV series at CES 2024
Honda is planning to make a bigger push into the EV market as part of its goal of introducing 30 electric models by 2030. We’ll get our first look at a new EV series from the automaker at CES 2024, where it’s set to show off the lineup during a press conference on January 9 at 1:30PM ET.
The automaker hasn’t revealed many more details about the new EV series. However, it did note that it will detail "several key technologies that illustrate the significant transformation Honda is currently undergoing." Honda is aiming to only sell zero-emission vehicles in North America by 2040, including battery electric and fuel cell electric powered models.
As it stands, the company has quite a bit of work ahead to hit that goal and perhaps catch up with its more EV-focused rivals. Currently, Honda has the all-electric Prologue SUV (which isn't available to the public yet) and two hybrids in its electrified lineup. In fact, it has killed off multiple hybrid models over the last few years.