
Exclusive interview: Adobe on enterprise markets and AI

Fri, 3rd Nov 2017

Adobe unveiled a lot of interesting things at its annual MAX event in Las Vegas recently, and it demonstrated just how far the company has come since its famed move to the cloud.

It was an extension of the experimentation and innovation that has seen Adobe become a leader in many markets, in some cases without close competition.

We sat down with Michael Stoddart, Adobe's Asia-Pacific director of digital media, to talk about the announcements, how the company is faring in the enterprise space, and the exciting new Sensei platform innovations.

Are there any MAX announcements that you're particularly excited about?

I'm interested in Project Europa, which ties together our two clouds. Our customers have been asking us to do that for quite some time, and I'm excited that we've been able to work towards it; it shows the value of design to a business.

Often when I'm talking to C-level executives there can be this perception that we just do design or creative stuff, but there are some fundamental things that tie in with that. If your site is broken or doesn't work, it can often just be a case of bad design, even if executives never use that term.

We have a lot of thought leadership around design thinking, UX and UI design, design implementation, and design itself. Over the years that hasn't been afforded a lot of status at the C-level, because executives were too worried about things like getting their SAP backend in or focusing on other core IT business functions.

In the past when we've talked to customers and said we want to talk about marketing or creativity, there was this segmentation: for 'let's talk about marketing' we'd go one way, and on the creative side we'd get pointed towards a bunch of guys working on Macs down on the second floor. Now we're talking about both sides at one level.

What kind of ideological hurdles exist when talking to executives?

There are definitely challenges with the perception that some businesses have when they're so engrossed in the business as it is that they don't have time to change it. The really great C-level executives are the ones that know they have to digitally transform, which goes beyond just having a clean website.

You need to address things like mobile connectivity, UX, touchpoint optimisation, how customers are coming in, and whether they're dropping out and things like that.

Do you see any of the new '1.0' releases as particularly useful from an enterprise perspective?

Definitely our XD product. XD is going to be the default design environment going forward. It's built on a different code base and it's so fast; the team literally won't add a feature that slows it down unless the benefits outweigh the speed reduction.

When I saw them demo a particular thing at the keynote at MAX, I thought it must have been faked because I'd never seen it happen that fast, but when I went back to my hotel room and tried it for myself, it was as fast as it was on stage.

The other side of XD is that it helps facilitate digital transformation agility. In a business, your heavy lifters are your coders, and you don't get access to them until you've got things like a job spec, a cost-benefit analysis, an ROI and a project budget. If you don't have any of that, they won't be interested.

On the marketing side, this conversation can be eased because XD allows you to show marketers how things are going to work without too much time and resource being spent on a demonstration. It allows you to show marketers exactly how an app will look, feel and work, so they can make decisions based on that.

On the Spark side, do you see the addition of custom branding capabilities opening the platform up to more SMB or enterprise use cases? Was it previously hamstrung a little bit by this?

It wasn't so much hamstrung, but you're right, it had our logo over it. Back in the days when we bought Macromedia, that stuff always had "made by Macromedia" watermarks on projects, because brand awareness was a priority back then. We don't really need that anymore.

The change enables us to provide a tickable box, and it opens up the platform to users who might not have been able to use it before. It's also made easier because users have a wide variety of templates that are legally approved as well as brand and marketing approved.

By having all of your content branded specifically for your company, you cover all of the business risks. It's a highly effective way to get stuff out as well. When we do Spark content we know that people will read and engage with it.

Switching gears to talk about Sensei, why are you pushing AI and machine learning so much?

One of the things we say about Sensei is that it's creativity at scale to drive personalisation for better outcomes. Sensei manifests in a couple of different ways. One is the algorithms and machine learning that work behind the scenes in our apps like Photoshop. People forget that Face-Aware Liquify requires something working behind it to be able to tell that it's a face, and Sensei works on that level as well.

The other side of it is that Sensei is in no way being built to replace anything. With designers at the moment, there's this content velocity problem where they can spend 10% of their time designing and 90% producing. No one wants to be doing a job that can be done automatically. There's a big narrative that artificial intelligence is going to replace humans, but we don't believe that this will happen.

Taking a piece of creative work and generating 10 different versions of it is tedious, and doing that with 150 versions is obviously worse. Sensei is designed to preserve creativity and get rid of the drudgery of repetitive work, so you can push things out to larger audiences much faster.

We also saw a more overt representation with the upcoming Sensei integration in Photoshop as a sort of virtual assistant. Which representation is going to see more development going forward?

It'll probably be both. Sensei basically uses our anonymous data gathering to make recommendations or best guesses. That will be the machinery of Sensei, just making things better and easier.

The sort of creative expression that we're going to need to deliver in the next two or three years is huge. If you take AR and VR, and even things like Facebook 360 videos, you're not going to be able to scale that without AI and machine learning, because there's just too much data and content involved.

The most immediate application of Sensei is deep etching hair. People have asked for that in Photoshop for a long time, because Photoshop hasn't been the greatest at selecting objects in the past. Sensei knows what an object is; it knows what hair is. When hair is spread over a blue background, Photoshop has had trouble recognising it because colour-based selection isn't able to discern the hair as a distinct object.

Sensei now changes that. It has essentially accomplished the holy grail of Photoshop edits, deep-etching hair, and it's different because it doesn't just recognise colour; it simply knows what hair is through machine learning, which is extraordinary.
