|A server room — but not Canon's|
According to a report from BleepingComputer, Canon has been hit by a ransomware attack that has resulted in more than 10TB (yes, terabytes) of data being stolen from Canon servers.
In a detailed report, BleepingComputer says known ransomware group ‘Maze’ has taken credit for the attack, which has affected nearly every facet of the company, both internal and consumer-facing. BleepingComputer also reports Canon’s IT department has sent out a company-wide message that reads:
'Message from IT Service Center
Attention: Canon USA is experiencing widespread system issues affecting multiple applications. Teams, Email and other systems may not be available at this time. We apologize for the inconvenience — a status update will be provided as soon as possible.'
|A number of the below domains from Canon show this error when attempting to visit.|
At this time, the following Canon domains are affected:
BleepingComputer also shared a partial screenshot it claims is ‘the alleged Canon ransom note.’ Maze, the ransomware operators claiming to be behind the attack, says it stole 10TB of data, including private databases, but has not disclosed the ransom amount or provided proof of what was taken.
The recent issues with Canon’s cloud-based media platform, image.canon, are unrelated to this ransomware attack, according to Maze.
BleepingComputer describes Maze as ‘an enterprise-targeting human-operated ransomware that compromises and stealthily spreads laterally through a network until it gains access to an administrator account and the system’s Windows domain controller.’ Maze is behind ransomware attacks on numerous other enterprises, such as LG, Xerox and more.
We contacted Canon for more information on the matter, to which Canon’s PR team replied ‘We are currently investigating the situation.’
Today, Samsung unveiled its latest flagship Note devices, the Galaxy Note 20 and Galaxy Note 20 Ultra, and teased a successor to its original Galaxy Fold. The new devices bring improved specifications across the board, including an upgraded three-module camera system.
While similar in name, the two devices appeal to different user bases, with the Galaxy Note 20 being the more entry-level phone, while the Galaxy Note 20 Ultra goes big across the board with a massive screen, tons of RAM and an impressive camera setup.
As the naming suggests, the Galaxy Note 20 is the more basic of the pair, but it still has plenty to offer. The phone is built around a massive 6.7” AMOLED display with a 20:9 aspect ratio and a 60Hz refresh rate. It’s constructed of Samsung’s ‘Glasstic’ plastic, uses Gorilla Glass 5 for the front display and is IP68 certified.
In North America, the device will use Qualcomm’s Snapdragon 865 Plus CPU and the global version will run on Samsung’s Exynos 990 CPU. Inside, it has 128GB of internal storage (no microSD card slot), 8GB of LPDDR5 RAM, 5G, Wi-Fi 6 and a 4,300mAh battery that can charge at 25W wired and up to 15W wirelessly.
Moving onto the camera setup, the Galaxy Note 20 offers three camera modules on the back: a 12MP F1.8 optically-stabilized wide-angle camera with 1.8μm pixels, a 64MP F2.0 telephoto (hybrid 3x zoom) camera with 0.8μm pixels and a 12MP F2.2 wide-angle camera with 1.4μm pixels. The front of the device uses a 10MP F2.2 camera module with 1.22μm pixels. As for video capture, the Galaxy Note 20 can record 8K video at up to 24fps in either 16:9 or 21:9.
The Galaxy Note 20 comes in ‘Mystic Gray,’ ‘Mystic Green’ and ‘Mystic Bronze,’ and will start at $1,000.
The Galaxy Note 20 Ultra takes the ‘bigger is better’ approach in nearly every way. The phone has a 6.9” AMOLED 120Hz display that wraps around the edge of the phone to form a 19.3:9 ratio. It’s constructed of metal and glass, with a Gorilla Glass 7 display cover, and is IP68 certified.
|A front-view comparison between the Note 20 and Note 20 Ultra.|
The CPU options for the Galaxy Note 20 Ultra are the same as the Note 20: a Qualcomm Snapdragon 865 Plus in North America and a Samsung Exynos 990 CPU for the global version. Inside is 12GB of LPDDR5 RAM, a 4,500mAh battery that can charge at 25W wired and up to 15W wirelessly, 5G, Wi-Fi 6 and the option for either 128GB or 512GB of internal storage (no microSD card slot).
For photography, the Galaxy Note 20 Ultra has the same 108MP F1.8 wide-angle optically-stabilized camera module found in the Galaxy S20 Ultra, a 12MP F3.0 telephoto (5x zoom) camera module and a 12MP F2.2 ultra-wide camera, as well as a Laser AF module for improved autofocus speed and accuracy. It can record 8K video at up to 24fps in either 16:9 or 21:9, identical to its non-Ultra counterpart.
The Galaxy Note 20 Ultra comes in ‘Mystic Black,’ ‘Mystic White’ and ‘Mystic Bronze,’ and will start at $1,300. Pre-orders for both phones start at 12:01 AM August 6th, with the first units expected to ship on August 21st.
In addition to the two Note devices, Samsung also showed off the Galaxy Z Fold 2, a successor to its original Galaxy Fold phone released last year. Samsung didn’t divulge many specifications for the new folding phone, but based on the product images, it appears to use the same camera setup found on the Galaxy Note 20. Samsung says it will reveal more information on the Galaxy Z Fold 2 on September 1.
The saying goes, ‘better late than never,’ but Fujifilm might be pushing the boundaries of the phrase with its new promotional video.
It’s been eight years since Fujifilm released its XF 35mm F1.4 R lens, but a new promo video showcasing the features of the lens has popped up on the Fujifilm X Series YouTube channel. Understandably, this new promo video has left some Fujifilm users confused and even disappointed, as an upgraded version of this lens is high on the request list of many Fujifilm users.
Unfortunately, this new four-minute video, which showcases numerous Fujifilm X-Photographers talking about the lens, isn’t a teaser for a new, upgraded ‘Mark II’ version or anything of the sort. It’s simply a self-described ‘ode’ to ‘one of the original X mount lenses [that] has captured countless numbers of precious moments over the years.’
The timing is curious, as is ‘The Original’ nomenclature, but the video links directly to a landing page for the XF 35mm F1.4 R, suggesting Fujifilm is still working hard to promote one of the first XF lenses.
The Sony a9 II is built for speed and power, but that doesn't mean everything you point it at has to be moving fast. In fact, its 24MP full-frame sensor, 20 fps burst shooting (with autofocus) and 693 phase-detect AF points make it very well equipped to handle virtually any shooting scenario. Especially those with slow-moving, meandering cattle.
During its Adobe MAX 2019 event, Adobe announced its Content Authenticity Initiative (CAI), the first mission of which is to develop a new standard for content attribution. 'We will provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org and other metadata standards that goes far beyond common uses today,' the company explains in a new white paper about the initiative.
The idea behind Adobe's CAI is that there's no single, simple, and permanent way to attach attribution data to an image, making it hard for viewers to see who owns the image and the context surrounding its subject matter. This paves the way for image theft, as well as the spread of misinformation and disinformation, a growing problem on the modern Internet.
Adobe's new industry standard for digital content attribution, which was announced in collaboration with Twitter and The New York Times, will potentially change this, adding a level of trust in content that may otherwise be modified or presented with an inauthentic context on social media and elsewhere.
Adobe said in November 2019 that it had a technical team:
...exploring a high-level framework architecture based on our vision of attribution, and we are inviting input and feedback from industry partners to help shape the final solution. The goal of the Initiative is for each member to bring its deep technical and business knowledge to the solution. Success will mean building a growing ecosystem of members who are contributing to a long-term solution, adoption of the framework and supporting consumers to understand who and what to trust.
The newly published white paper titled 'The Content Authenticity Initiative: Setting the Standard for Digital Content Attribution' explains how this new digital content attribution system will work.
The team cites a number of 'guiding principles' in the initiative, including the ability for their specifications to fit in with existing workflows, interoperability for 'various types of target users,' respect for 'common privacy concerns,' an avoidance of unreasonable 'technical complexity and cost' and more. Adobe expects a variety of users will utilize its content attribution system, including content creators, publishers and consumers, the latter of which may include lawyers, fact-checkers and law enforcement.
The team provides examples of the potential uses for its authenticity system in various professions. For photojournalists, for example, the workflow may include capturing content at a press event using a 'CAI-enabled capture device,' then importing the files into a photo editing application that has 'CAI functionality enabled.'
Having preserved those details during editing, the photojournalist can then pass on the images to their editor, triggering a series of content verifications and distribution to publications, social media managers and social platforms, all of which will, ideally, support displaying not only the CAI information but also any alterations made to the content (cropping, compression, etc).
The idea is that at all times during its distribution across the Internet, anyone will be able to view the details about the image's origination, including who created it, what publication originally published the image, when the photo was captured, what modifications may have been made to the image and more.
The white paper goes on to detail other potential creation-to-distribution pipelines for creative professionals and human rights activists.
What about the system itself? The researchers explain that:
The proposed system is based on a simple structure for storing and accessing cryptographically verifiable metadata created by an entity we refer to as an actor. An actor can be a human or non-human (hardware or software) that is participating in the CAI ecosystem. For example: a camera (capture device), image editing software, or the person using such tools.
The CAI embraces existing standards. A core philosophy is to enable rapid, wide adoption by creating only the minimum required novel technology and relying on prior, proven techniques wherever possible. This includes standards for encoding, hashing, signing, compression and metadata.
Each process in the creator's workflow, such as capturing the image and then editing it, produces 'assertions' as part of the CAI system. According to the white paper, these assertions are typically JSON-based data structures that reference declarations made by the actor, which can be a human or a machine, including hardware like cameras and software like Photoshop.
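To make the shape of such an assertion concrete, here is a minimal sketch in Python. The field names and values below are purely illustrative assumptions for this article, not the actual CAI schema.

```python
import json

# A hypothetical JSON-based assertion, loosely following the white paper's
# description: a typed declaration made by an actor (here, a camera).
assertion = {
    "type": "capture.metadata",        # illustrative assertion type label
    "actor": "Example Camera X100",    # the hardware or software actor
    "data": {
        "capture_time": "2020-08-05T10:30:00Z",
        "gps_disabled": True,          # privacy choices can be declared too
    },
}

# Canonical serialization (sorted keys) so the structure hashes consistently.
serialized = json.dumps(assertion, sort_keys=True)
print(serialized)
```

Serializing with sorted keys matters because, as described below, assertions are later hashed; any actor verifying the hash must produce byte-identical JSON.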
The researchers go on to explain that:
Assertions are cryptographically hashed and their hashes are gathered together into a claim. A claim is a digitally signed data structure that represents a set of assertions along with one or more cryptographic hashes on the data of an asset. The signature ensures the integrity of the claim and makes the system tamper-evident. A claim can be either directly or indirectly embedded into an asset as it moves through the life of the asset.
For every lifecycle milestone for the image, such as its creation, publication and so on, the authenticity system creates a new set of assertions and a corresponding claim, with each claim daisy-chaining off the previous one to create something like a digital paper trail for the work.
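The hash-and-chain mechanism described above can be sketched roughly as follows. This is a simplified illustration under stated assumptions: HMAC-SHA256 stands in for the real public-key signature the white paper implies, and every structure and field name here is invented for the example, not the actual CAI format.

```python
import hashlib
import hmac
import json


def make_claim(assertions, prev_claim_hash, signing_key):
    """Gather assertion hashes into a claim and sign it.

    HMAC-SHA256 is a stand-in for a proper digital signature; the
    structure and field names are illustrative only.
    """
    assertion_hashes = [
        hashlib.sha256(json.dumps(a, sort_keys=True).encode()).hexdigest()
        for a in assertions
    ]
    claim = {
        "assertion_hashes": assertion_hashes,
        "previous_claim": prev_claim_hash,  # daisy-chain to the prior claim
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return claim


key = b"demo-key"

# Milestone 1: capture. No previous claim exists yet.
capture = make_claim([{"event": "capture", "actor": "camera"}], None, key)
capture_hash = hashlib.sha256(
    json.dumps(capture, sort_keys=True).encode()
).hexdigest()

# Milestone 2: edit. Its claim references the capture claim's hash,
# forming the "digital paper trail" for the asset.
edit = make_claim([{"event": "edit", "actor": "photo editor"}], capture_hash, key)

# Altering any earlier assertion changes its hash, which breaks the signed
# claim that contains it -- this is what makes the chain tamper-evident.
print(edit["previous_claim"] == capture_hash)  # True
```

The design choice worth noting is that each claim signs only hashes, not the assertions themselves, so large assertions can live elsewhere while the compact, signed chain travels with the asset.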
Of course, there are potential issues with Adobe's vision for content authentication, the most obvious being whether the industry is willing to adopt this system as a new standard. The CAI digital content attribution system will only succeed if major hardware and software companies implement the standard into their products. Beyond that, social media platforms would need to join the effort to ensure these permanent attribution and modification details are accessible to users.
Adobe's system will also have to live up to its stated goal of being tamper-evident, something that has yet to be demonstrated. Work under the initiative is still underway; interested readers can find all of the technical details in the white paper linked above.