The Ultimate Guide to the New Frame.io Search Experience

The most significant evolution in Frame.io search—ever.

If you’ve worked in media for any length of time, you know that finding content can be almost as challenging as creating it.

Entire productions grind to a halt because files are in the wrong location, the exact quote is proving elusive, assets weren’t tagged correctly, or a customer commented that something was “off-brand,” but you cannot recall which project or asset, let alone which version.

That’s why I’m excited to share something truly transformational:

Today, we’re launching the public beta of a new era of search with media intelligence.

Not an update. Not an iteration. A step-change—powered by a new combination of natural language processing (NLP) and semantic understanding that helps you search the way you think, not the way you file.

For creative teams swimming in growing oceans of media, this is more than a feature launch. It’s a new foundation.


Introducing the next leap forward in how you discover, surface, and use your assets

This overhaul gives customers on paid Frame.io plans faster, more intuitive discovery—across comments, transcripts, metadata, and the visual content itself.

  • Media Intelligence (NLP)
    • Natural language processing for search queries interprets everyday language so teams can find assets faster without relying on exact filenames or tags.
  • Comments Search
    • NLP-driven lexical search across the content of all comments in the account to surface feedback and assets instantly.
  • Transcription Search
    • NLP-driven lexical search across the content of all transcriptions in the account. This lets editors jump to the exact spoken moments in footage by searching transcripts with natural language queries.
  • Increased Metadata Support
    • NLP extends search across most system fields and file attributes, expanding discoverability through natural language.

And, for Teams and Enterprise plans:

  • Media Intelligence (Semantic Search)
    • Semantic search for all images and video across the account. This finds the right media instantly by recognizing visual meaning in images and video, even when filenames or tags are incomplete or missing.


Available for Frame.io desktop, iOS app, and mobile web (coming soon). Metadata fields are available to developers via the API.
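
For developers, here’s a minimal sketch of what reading an asset’s metadata over the API could look like in Python. The endpoint path and response shape below are hypothetical placeholders, not documented routes, so treat this as an illustration and consult the Frame.io developer reference for the real API surface.

```python
# A minimal sketch of reading an asset's metadata via the Frame.io REST API.
# NOTE: the endpoint path below is a hypothetical placeholder, not the
# documented route -- consult the Frame.io developer reference for the
# real API surface.
import os

import requests

API_BASE = "https://api.frame.io"          # assumption: public API host
TOKEN = os.environ["FRAMEIO_TOKEN"]        # a developer bearer token

def get_asset_metadata(asset_id: str) -> dict:
    """Fetch the metadata fields for a single asset (hypothetical route)."""
    resp = requests.get(
        f"{API_BASE}/v4/assets/{asset_id}/metadata",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

print(get_asset_metadata("your-asset-id"))
```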


The story behind the evolution

Like many of you, I learned the importance of search the hard way.

Early in my career, I found myself in a 20’ × 20’ editing room overflowing with shelves of tapes: camera masters, stock footage, library material, EDLs with scribbles, and pre-build tapes with effects assembled over multiple passes. My job was to version a three-hour special on the history of television for international distribution. There was no final digital sequence or EDL to reference—just the final North American version on tape, plus bins, notes, tape labels, and hope.

Where I needed to extend or shorten a sequence in order to hit a different duration, I had to trace every video and audio component back by hand. I’d reverse-engineer effects, correlate timecodes, decipher handwriting, and then with a clutch of candidate tapes in hand, begin fast-forwarding and eventually scrubbing frame-by-frame in search of the exact clip needed. It was part detective work, part endurance test.

That experience taught me something fundamental:
Organization and structure, while absolutely essential, don’t scale well when you need to locate assets or moments across a large pool of media.

Powerful search and great metadata are required to cut through the scale of media that many customers are managing.


Why search matters more than ever

Teams now generate more footage, more metadata, more versions, more deliverables, more transcripts, and more review comments than ever before. Cloud workflows have accelerated production—but without fast, intuitive discovery, the benefits quickly erode.

This release is our answer to that modern creative reality: reducing creative overhead.

Media creatives should be free to focus on the work, instead of marshaling the skillsets of librarians, metadata specialists, or archivists.

The faster you find assets, the faster you create, iterate, and deliver a better end result.

In an industry where deadlines are measured in hours—not days—this really matters.


As someone who has lived in edit bays, machine rooms, and production offices for decades, I’m deeply motivated to help make creative work easier, faster, more intuitive, and more fun!

The new Frame.io search with media intelligence is built around five main pillars toward that vision:

  1. Quality and accuracy. Users want the most relevant results, smartly ranked.
  2. Metadata welcome but not required. When metadata is present, its full power should be available within search. However, adding metadata shouldn’t be a prerequisite for Frame.io to help locate what’s needed.
  3. Search should speak your language. Frame.io’s NLP is, by design, media- and production-savvy, meaning users can phrase queries in familiar ways. NLP also helps bring multiple search dimensions together to produce results that best match the user’s intent.
  4. Workflow-oriented. Search doesn’t exist in isolation. It’s a deeply embedded part of the overall workflow. Frame.io’s new search experience reflects that with quick actions you can apply directly to search results without leaving the search interface.
  5. AI tools should respect your content and craft. Our models are never trained on your media and never on data scraped from the web.

How you’ll receive results from your search

Let’s dig into how you’ll receive results from your search, starting from the top with the quick actions available directly in the search interface.
From here you can:

  • Go directly to the result (asset, folder, project)
  • Download (assets or folders)
  • View in source (assets or folders)
  • Copy URL (assets or folders)
  • Open settings (projects)

When results contain matches for a comment, you can navigate directly to a specific comment using the detail panel to the right of the results list:

Navigating comment results

When results contain matches for a transcript, upon jumping to the asset, the transcript panel will be open and the query ready for click-through:

Navigating transcript results

When results contain visual/semantic matches, you’ll see white lines (subclips) on the timebar. You can navigate directly to a specific subclip using the detail panel to the right of the results list:

Navigating visual/semantic matches


The two screenshots below show the difference in results.

Both the standard and the new media intelligence results are shown for the same query:

black and white images showing woman working out in gym

  • The standard results at the top of the search results area are those that match lexically based on name or core metadata. The top three ‘standard results’ are shown, with the View All affordance to display more.
  • The media intelligence results are those that match based on NLP, which includes a broader range of metadata, all comments, all transcriptions, and, for Teams and Enterprise accounts, semantic search too.
  • The panel on the right shows more detail for the selected result, e.g., metadata and, when relevant, subclips for semantic or transcription matches.

Let’s take a deeper look at each of these.

Standard results

Standard results are particularly helpful when you already know the name of the item you wish to locate. Standard results derive matches lexically based on the name for assets, folders, and projects. For assets, there are also matches from metadata values for a core set of metadata fields:

  • Status
  • Keywords
  • Uploader
  • Assignee
  • File Type
  • Format
  • Source Filename

Media intelligence results

Media intelligence results are ideal when you have a multi-intent query (e.g., matching across three different metadata fields). Media intelligence results derive matches courtesy of NLP (see below), which in turn leverages:

  • Content of comments account-wide
  • Content of all transcriptions account-wide
  • Semantic search (for Teams and Enterprise Accounts)
Additional metadata values:
  • System Fields:
    • Assignee
    • Comment Count
    • Date Uploaded (including date range)
    • Keywords
    • Notes
    • Rating
    • Seen By
    • Status
    • Uploader
  • File Attributes:
    • Alpha Channel (presence)
    • Audio Bit Depth
    • Audio Bit Rate
    • Audio Codec
    • Audio Sample Rate
    • Bit Depth
    • Bit Rate
    • Audio Channels
    • Color Space
    • Duration
    • Dynamic Range
    • End Time
    • File Size
    • File Type
    • Format
    • Frame Rate
    • Page Count
    • Resolution – Height
    • Resolution – Width
    • Source Filename
    • Start Time
    • Video Bit Rate
    • Video Codec
    • Visual Bit Depth
    • Transcription (presence)


Media intelligence (NLP): natural language processing that finds what you need, using words

NLP parses the query entered by the user, with reference to media-industry and technology terminology, and Frame.io’s metadata fields. It breaks down the query text into a series of parts that can be matched against names, metadata fields, comments, transcriptions, plus visual semantic closeness.

Let’s break down this query as an example:

Assets with Rating 3 or above, uploaded between January 1st and today, with File Size over 1.5GB

  • “Assets” – interpreted as video, images, audio, or text (vs. a folder or project)
  • “with Rating 3 or above” – the asset’s Rating metadata field has a value of 3 or higher
  • “uploaded between January 1st and today” – the asset was uploaded during the specified date range
  • “File Size over 1.5GB” – the asset’s file size falls within the specified range

The result is a multi-intent query, from something written in a natural or intuitive way.
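
To picture what that means in practice, here’s a purely illustrative sketch of the kind of structured filter such a query might decompose into. Frame.io’s actual parser and internal representation aren’t public; every field name below is hypothetical.

```python
# Purely illustrative: the kind of structured filter the example query might
# decompose into. Frame.io's real parser and internal representation aren't
# public; every field name below is hypothetical.
from datetime import date

parsed_query = {
    "item_type": "asset",                  # "Assets" (vs. folder or project)
    "rating": {"gte": 3},                  # "Rating 3 or above"
    "date_uploaded": {                     # "uploaded between January 1st and today"
        "from": date(date.today().year, 1, 1),
        "to": date.today(),
    },
    "file_size_bytes": {"gt": int(1.5 * 1024**3)},  # "File Size over 1.5GB"
}
print(parsed_query)
```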

It’s worth taking a moment to set expectations clearly. Our updated search is in public beta. You will see real value on day one, and the experience will improve as we expand and refine the underlying models. We at Frame.io still have tuning and learning ahead of us before releasing the full production version next year. In the meantime, we encourage you to share your feedback and help improve the experience for everyone.


Semantic search: visually locate moments, emotions, subjects, and objects

Semantic search lets you search the content of images and videos based on intent and/or visual similarity. This means being able to find not only assets, but also moments or elements within assets, rapidly and without prior tagging.

This is the same technology that editors are already using inside Premiere—but now accessible inside a shared, cloud-native environment.

Instead of using the standard metadata fields that are either present within the asset before ingest or added afterwards within the Frame.io system, semantic search relies on a new way of analyzing and indexing content. These ‘semantic embeddings’ describe and reference the content of images using floating point numbers, and the content of video as a series of floating point numbers over time.

When you enter a query, Frame.io generates an embedding from the query text and matches it against these stored embeddings, allowing recall of the relevant images or moments of video. These searches rely on a relevancy score, so they’ll always present something by getting as close as possible.
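
Conceptually, the matching works like the toy sketch below: media and query are embedded as vectors, and a relevancy score such as cosine similarity ranks the closest items. This illustrates the general technique only, not Frame.io’s actual model, vectors, or scoring.

```python
# Toy illustration of embedding-based retrieval: rank stored media embeddings
# by cosine similarity to a query embedding. The vectors here are made up; a
# real model derives them from pixels (images) or frames over time (video).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend index: asset name -> embedding vector.
index = {
    "sunset_beach.mp4": np.array([0.9, 0.1, 0.3]),
    "gym_workout.mov":  np.array([0.1, 0.8, 0.2]),
    "city_night.jpg":   np.array([0.7, 0.2, 0.6]),
}
query = np.array([0.85, 0.15, 0.35])  # e.g., the embedded text "sunset"

# Every item receives a score, so the closest matches always surface --
# which is why sparsely populated accounts can return loosely related results.
for name, vec in sorted(index.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(name, round(cosine_similarity(query, vec), 3))
```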

For the sake of contrast:

  • With standard results, if there is no match on the name or other metadata, then no results are shown.
  • However, searches using media intelligence with visual semantic matches will always deliver something, even if it isn’t necessarily relevant. For this reason, accounts that are relatively unpopulated, or populated with a very narrow range of content, may show odd results if a query calls for something that isn’t present or if there are very few results to call on.

Again, I mention these things in the spirit of setting expectations, since here we will also most certainly have some optimizing and tuning to do.

Note: By default, the new natural language processing behind search automatically indexes the last 30 days of new uploads. That’s so you can start using it right away. If you need to look back further than 30 days and index older content, we can manually set that up for you with an Enterprise plan. If you’re already on Enterprise, reach out to your customer success manager, and we’ll set up historical indexing with you.

If you’re not a Frame.io Enterprise customer and want to index older content, you can connect with the sales team to explore plan options here →


Hybrid lexical + semantic search: the best of both worlds

One of the most exciting and powerful aspects of this release is the ability to search using a combination of lexical and semantic components with a single query. This hybrid capability gives you precision when you need it, flexibility when you don’t, and discovery when you weren’t even looking for it.

The outcome is that users can drive more complex queries containing several match requirements across name, metadata, comments, transcriptions, and semantic search.

For example, let’s break down this multi-intent query:

4K Sunset clips of happy people with status is approved

  • “clips” – interpreted as video assets (vs. images, audio, text)
  • “4K” – interpreted as video assets with a resolution of either 4096×2160 or 3840×2160 pixels
  • “Sunset” – will be processed by the semantic engine’s visual capabilities, seeking visual closeness to sunsets that the model has been trained on
  • “happy people” – this matching on intention requires the model to utilize the associations it’s made across many different visual patterns related to the concepts of people and happiness
  • “status is approved” – where a user mentions a known metadata field in their query, NLP will match against the various values for that metadata field in the account
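
To make the hybrid idea concrete, here’s a small sketch of the general pattern: exact (lexical) metadata constraints prune the candidate set, and a semantic relevancy score then ranks what remains. Again, this illustrates the approach, not Frame.io’s implementation; all names and data are invented.

```python
# Sketch of hybrid retrieval: exact (lexical) metadata constraints prune the
# candidates, then a semantic relevancy score ranks what remains. All of the
# data and field names below are made up for illustration.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

assets = [
    {"name": "beach_eve.mp4", "res": (3840, 2160), "status": "approved",
     "embedding": np.array([0.9, 0.2])},
    {"name": "office.mp4",    "res": (1920, 1080), "status": "approved",
     "embedding": np.array([0.1, 0.9])},
    {"name": "dunes.mp4",     "res": (4096, 2160), "status": "needs review",
     "embedding": np.array([0.8, 0.3])},
]

# Lexical half of "4K ... status is approved": hard constraints.
candidates = [a for a in assets
              if a["res"] in [(4096, 2160), (3840, 2160)]
              and a["status"] == "approved"]

# Semantic half of "Sunset clips of happy people": rank by closeness.
query_vec = np.array([0.95, 0.25])  # stand-in for the embedded query text
for a in sorted(candidates,
                key=lambda a: cosine(query_vec, a["embedding"]),
                reverse=True):
    print(a["name"])  # -> beach_eve.mp4
```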


Tips for getting better search results

Here are a few pointers to help you get the best search results out of the new Frame.io search.

Standard (lexical) search

When searching for a match on an asset, folder, or project name, try to be as complete as possible. If you are looking for an asset entitled “project x-ray,” you will have greater success with the full term than with just a part of it. For example:


Best results:

project x-ray

Less accurate results:

x-ray

It’s also possible to search for a match on multiple names. Enter each name separated by a space or a comma:

health, medical, life

or

10001.jpg, 10002.jpg, 10003.jpg

When searching for a match on a folder name, enter the name and once the results show up use the “Folder” filter:

nature

When searching for a match on a project name, enter the name and once the results show up use the “Project” filter:

stock assets

Media intelligence (NLP) search

When searching for assets that match based on metadata values, it helps to include the metadata field name followed by the desired value, such as:

  • status approved
  • keyword nature
  • assignee samuel
  • project stock assets
  • codec pro res
  • uploaded over past 30 days
  • uploader felicia

When searching for a match against a date range, use words rather than numeric shorthand:

  • uploaded during December 2025
  • uploaded between January 1st 2025 and today

When searching for a match within a transcript, call that out together with the word or phrase you’re seeking:

transcript “motivation”

or

transcript contains motivation

When searching for a match within a comment, call that out together with the word or phrase you’re seeking:

comment “hero”

or

comment contains hero

If you wish to build a multi-intent query, then it can help to leave out words that aren’t strictly necessary.

For example, while the query below will work:

grab the PDF from the whisper fan project, it has two pages, Fabian uploaded it I think

This version, however, demonstrates a cleaner, more scalable approach:

PDF 2 pages project whisper fan uploader Fabian

You can also search for numeric ranges, such as:

  • PDF with 20 pages or more
  • Images rated at 3 stars or above
  • Video resolution is greater than 1920 x 1080 but less than 3500 x 2000
  • Assets deleted within the past 30 days

Media intelligence (semantic)

You can search for simple visual matches, such as:

  • sunsets
  • meadow
  • flowers

You can search for basic visual matches with some situational awareness, such as:

  • sunsets over the ocean
  • meadow with golden grasses
  • red flowers in a vase

You can search for emotions, such as:

  • woman smiling
  • child feels safe
  • boy is excited
  • man is fearful

You can search for concepts or mood, such as:

  • colorful, dramatically lit images of technology
  • teens or young adults dancing at a party
  • black and white images of a woman working out in a gym

Hybrid lexical + media intelligence (NLP and semantic)

Here are some examples of combining the power of lexical and media intelligence to search for visual or conceptual matches while also specifying metadata attributes:

  • 4K ProRes clips of apex predators
  • monochrome images with notes "character jake"
  • videos of a mother with child swimming uploaded by Sean
  • martial arts competition or challenge rated at 4 stars or above
  • images and videos of people who have disabilities in sporting events
  • video or images uploaded in 2025 showing urban density or big city life, shopping and leisure

Optimizations are on the way

During the beta period, we are still iterating on search overall and tuning the ranking of results.

For example, the most recent results won’t necessarily always appear near the top of the list.

The purpose of the beta period is to collect feedback to help improve this experience for customers like you. So, if you encounter search results that seem unexpected, please provide feedback via the in-app Search feedback button and/or contact customer support.

Thank you for helping us make the best search we can!

Negation logic is not yet supported

Currently, negation logic is not operational. As such, queries like the examples below will not work as expected:

  • Find me images that don’t contain trees
  • Videos with status needs review but not if tagged with keyword jungle
  • Medical procedure but no blood

And while we already support almost all default metadata fields (system fields and file attributes), we have not yet implemented the ability to search across custom metadata fields. We’re working on it!

Built on ethical and transparent AI

Creatives want powerful AI—but they also want to trust the systems they use.

Adobe has taken a consistent stance:

  • We do not train on customers’ assets.
  • We do not train on random web content.

Our models are trained on responsibly sourced, permissioned, licensed data with full consent. This ethical foundation is one of Adobe’s biggest differentiators.

Nevertheless, if you or your organization is not yet comfortable using AI technology, we offer the ability to opt out. Just contact support to let us know, and your assets won’t be processed for semantic search.

You will still be able to take advantage of lexical search and NLP, but the visual semantic component won’t be present.


Ready to dig in? Here’s how to get started

Use the public beta now

The new search beta experience is available to all paid accounts using the latest version of Frame.io, Version 4.

You’ll be able to experience just how fast, intuitive, and powerful the new search feels in your own projects.

  • All paid plan customers will benefit from the introduction of NLP, extra metadata, transcriptions, and comment search.
  • Teams and Enterprise plan customers will also benefit from semantic search for all newly ingested assets.

Get the most out of search with media intelligence using our dedicated support article

We’ve prepared clear examples, tips, and best practices to help you get the most out of this release in a dedicated support article →

Share feedback as you use it

Use the in-app Search feedback button to send us thoughts, ideas, and feature requests. Your input will directly help shape development priorities.


What’s coming next

Now that we have such a strong foundation for search, our initial focus will be:

  • Performance improvements
  • Refining ranking and relevance
  • Rapidly incorporating customer feedback
  • Tuning NLP behaviors
  • Adjusting semantic weightings across different query types

We’re already exploring new capabilities such as face recognition, auto-tagging, and other assistive tools. This is where your voice through feedback carries so much weight and value.

The start of a new era of discovery

If the last decade was about re-centering media and metadata in the cloud, the next five years are about making sense of it all at scale. This is the year Frame.io search stepped fully into that future.

We’re on a journey, and there’s more work to do. But this new foundation is a giant step forward—and I can’t wait to hear your feedback and feature requests as we remain firmly focused on helping you and your team go from mountains of files to meaningful content.

Happy searching.

P.S. This is truly a team achievement. I want to extend my sincere thanks to our customers, beta participants, the Frame.io and Adobe Search & Metadata teams, and Frame.io leadership.



Frame.io search FAQ

Does the natural language processing (NLP) have access to the account’s media assets?

No. NLP is only used to parse the text query entered by the user. It has no awareness of the content against which the final query will be run.

Will customer assets be used to train/reinforce our AI model?

No. In line with Adobe’s AI ethics principles, we do not use customers’ data to train or augment our internal model.

The new feature will generate references to visual objects. What is that and where will the associated metadata be stored?

The references are sets of decimal numbers that only Adobe can decode. The references are stored in our secure internal database. No decoding takes place other than when the customer searches in their account.

Do we use the same Adobe model for semantic search as Premiere?

Yes. We use the Adobe One model, which follows Adobe’s AI ethics principles.

Note that there is currently no way to have Premiere read Frame.io embeddings or vice versa, although this would be possible if the Premiere storage were in the Adobe Cloud alongside Frame.io.

Does semantic search support finding specific people in pictures and videos using facial recognition?

No. This is functionality under consideration for the 2026 roadmap.

What is the difference between semantic search in Frame.io vs. Premiere?

  • Premiere searches on-device within the current project
  • Frame.io searches account-wide and utilizes NLP

Can I search using the Frame.io panel in Premiere?

Yes. This will search the entire Frame.io account vs. the native search in Premiere, which searches just the Premiere project on the local workstation.

Will there be a support article during the beta period?

Yes. You can find the support article here →

Which Frame.io plans benefit from the new search experience?

Teams and Enterprise plan search experience

  • Searching comments, transcriptions, and additional metadata fields
  • NLP and semantic search
  • Can opt out of semantic search
    • Opt-out is at the account level.
    • Account Admin can opt out of semantic search by contacting customer support (individual users cannot opt out)
    • Will still benefit from NLP

Pro plan search experience

  • Will benefit from searching comments, transcriptions, and additional metadata fields
  • Will benefit from NLP
  • Will NOT benefit from semantic search

Free plan search experience

  • Will benefit from searching comments, transcriptions, and additional metadata fields
  • Will NOT benefit from NLP or semantic search

Is auto-tagging supported with semantic search?

Not at this time.

What known limitations exist with this new version of Frame.io search?

  • Cannot scope a search to specific projects without typing “Project is X” in order to utilize NLP
  • No custom metadata
  • No face detection
  • Limited to 50 Standard and 50 Media Intelligence Results
  • No saved searches or convert-to-Collection

Which metadata are supported with standard search?

  • Name
  • Asset vs. Folder vs. Project
    • Including whether Assets or Folders were deleted in the past 30 days
  • Fields:
    • System Fields:
      • Status
      • Keywords
      • Uploader
      • Assignee
      • File Type
    • File Attributes:
      • Format
      • Source Filename

Which additional metadata are supported with NLP?

  • Comment contents
  • Date Range
  • Transcription contents
  • Transcription (presence)
  • Fields:
    • System Fields:
      • Comment Count
      • Date Uploaded (including date range)
      • Notes
      • Rating
      • Seen By
    • File Attributes:
      • Alpha Channel (presence)
      • Audio Bit Depth
      • Audio Bit Rate
      • Audio Codec
      • Audio Sample Rate
      • Bit Depth
      • Bit Rate
      • Audio Channels
      • Color Space
      • Duration
      • Dynamic Range
      • End Time
      • File Size
      • File Type
      • Format
      • Frame Rate
      • Page Count
      • Resolution – Height
      • Resolution – Width
      • Source Filename
      • Start Time
      • Video Bit Rate
      • Video Codec
      • Visual Bit Depth
      • Transcription (presence)

Which metadata are not (yet) supported with NLP?

  • Workspace
  • Seen By (people)
  • Seen By (count)
  • Date Time Deleted
  • Version Stack (is/is contained within)
  • Comments (extras such as mentions, hashtags, reactions, read receipts)

Steve Holyhead

Steve Holyhead is a Product Strategist for Adobe Frame.io and an Emmy Award-winning media industry product-and-workflow veteran. He’s dedicated the past 12 months to leading the Frame.io Search and Metadata Squad as they’ve built a completely new search experience from the ground up.
