Carrying Security Intent from the Database to GraphQL

In my previous post about the TiDB-GraphQL project, I covered the idea of treating the database schema as a design surface rather than an implementation detail. Security is one area where that idea becomes concrete.

How Database Access Is Commonly Handled

Many modern applications use a shared-credential database access model, where user identity is resolved at the application boundary (with mechanisms like OIDC and JWTs). The application then connects using a single database identity, through a connection pool, to the database.

In this model authorization is enforced in application code, middleware, or policy layers. By the time a query reaches the database, the database itself typically has no awareness of the end user. All user context has already been resolved elsewhere.

This model works well. It scales, fits managed database offerings, and keeps operational complexity of the database low. The tradeoff is that the database, the store of your application’s data, becomes largely passive from a security perspective.

Earlier Experiences with Database-Enforced Security

Earlier in my career, I worked on systems where security was handled differently. Rigorous access control at the database layer was required by our customers. In this model, the user's identity was used to connect to the database, and permissions were enforced directly through grants and schema design. In short, the database determined what each user could access, and those rules applied consistently regardless of how the data was reached.

That exact model does not translate cleanly to modern, cloud-native systems. Per-user database connections do not scale well, and they complicate the ability to use tools like connection pools. However, the underlying idea stuck with me. The database does not have to be a passive participant in security.

Preserving Security Intent

When access rules are enforced at the database layer, they become part of the schema’s intent. Tables, views, roles, and grants together describe not just structure, but who is allowed to see what.

One of my rules of thumb is that security controls should be applied at the right level and with the right granularity. For data held in a database, enforcing access close to the data makes that control much easier to manage. Without it, similar checks have to be reimplemented in multiple places, and the database no longer reflects the full set of assumptions about data access.

For this project, I was interested in seeing how access control applied at the database level could be surfaced up to the application itself. The goal is to preserve database-level access intent without abandoning modern authentication patterns, shared connection pools, or the user experience.

Applying This in TiDB-GraphQL

TiDB-GraphQL supports two models for managing data access.

First, a shared-credential database access model can be used out of the box. This is a familiar pattern, and is easy to get up and running with.

The second approach uses TiDB's Role-Based Access Control (RBAC) to manage access to the database. To deliver this, TiDB-GraphQL integrates with modern identity mechanisms (like OIDC and JWTs), and it continues to rely on pooled database connections. What changes is how the authenticated identity is carried from the application to the database.

With the RBAC-integrated model, authorized users are mapped to database roles, and all queries and mutations execute within that role context by switching roles on pooled connections. This means the database's existing RBAC model is used to authorize data-level access, while the application remains responsible for authentication.

In practice, this means:

  • Identity is handled using standard authentication mechanisms
  • Database connections remain pooled and shared
  • Authorization is enforced using database roles
  • GraphQL reflects what the database permits, rather than redefining those rules again

A High-Level Architecture

At a high level, the flow looks like this:

  1. A user authenticates (using OIDC or a similar mechanism)
  2. TiDB-GraphQL validates the bearer token and loads claim data
  3. TiDB-GraphQL obtains a DB connection from the pool and switches the connection to the mapped role with SET ROLE
  4. Resolvers execute SQL under that role. TiDB enforces table/column access.
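To make the role-switching step concrete, here is an illustrative Python sketch. This is not the project's actual code: the claim name db_role and the helper names are hypothetical, and it only builds the statements an API layer might run when checking a pooled connection out and back in.

```python
# Illustrative sketch (not TiDB-GraphQL's actual implementation):
# map a validated JWT claim to a database role and build the SET ROLE
# statements run around each request on a pooled connection.

def role_statements(claims: dict, claim_key: str = "db_role") -> list[str]:
    """Statements to run when a pooled connection is checked out.

    `claim_key` is a hypothetical claim carrying the mapped role name.
    """
    role = claims.get(claim_key)
    if role is None:
        raise PermissionError("token carries no database role claim")
    # Escape backticks so the role name is a safe quoted identifier.
    quoted = "`" + role.replace("`", "``") + "`"
    return [f"SET ROLE {quoted}"]  # adopt the user's role for this request

def reset_statements() -> list[str]:
    # Run before the connection is returned to the pool, so role
    # context never leaks between requests.
    return ["SET ROLE NONE"]

print(role_statements({"db_role": "analyst"}))  # ['SET ROLE `analyst`']
```

The important design point is the reset step: because connections are shared, the role context has to be cleared before a connection goes back to the pool.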

In this second model, the database enforces access directly, and the API surfaces the results. You can read more about this approach in the TiDB-GraphQL project’s authentication architecture doc page.

Some Tradeoffs

This approach introduces its own constraints. Role management requires care. Schema design and RBAC need to be treated as first-class concerns. Some authorization logic moves closer to the data layer, which may be unfamiliar for teams used to handling everything in application code.

For many applications, a traditional shared-credential approach will remain the right choice. However, for systems where data-level security matters, and where the database already encodes meaningful access boundaries, this approach offers an interesting alternative.

Introducing TiDB-GraphQL: A Database-First Approach to GraphQL APIs

My first exposure to GraphQL was quite a few years ago during my time at Rubrik. Initially, it was something I explored during hackathons, consuming the existing GraphQL APIs as a way to try out new UX ideas. Over time, GraphQL became more relevant to my day-to-day work, particularly as Rubrik began building integrations with third-party security products that needed to consume our APIs, which were exposed using GraphQL.

It was certainly a contrast to what I had worked with previously, which was mostly REST-style APIs. What stood out to me was not just the flexibility of GraphQL, but the way a schema could act as a shared point of understanding. Clients could discover what data was available, how it was structured, and how different parts of the system related to each other, without relying heavily on documentation or prior knowledge. You can see similar ideas reflected in SQL through mechanisms like INFORMATION_SCHEMA, which allow the structure of a database to be discovered directly.

Around the same time, I also came across some of the work Simon Willison was publishing on the GraphQL plugin for Datasette. Datasette is a tool for publishing and exploring SQLite databases, and its GraphQL support makes it possible to query a relational schema directly through a GraphQL API. It treated the database schema as something intentional and worth surfacing, rather than something to hide behind a bespoke API layer.

From Observability to an API Experiment

More recently, I have been working on observability requirements for TiDB. As part of that work, I wanted a simple way to generate end to end OpenTelemetry traces, from the application through to SQL execution. As I was thinking about this, those earlier ideas around GraphQL and Datasette resurfaced. Exposing a GraphQL interface from a database-centric perspective felt like an interesting problem to explore, particularly in the context of TiDB.

That exploration became the starting point for this project.

Why TiDB?

TiDB is a distributed SQL database that combines horizontal scalability with a traditional relational model, without requiring application-level sharding. In my current stint in Product Management at PingCAP (the company behind TiDB) I have been focused a lot on the core database engine, and how that engine fits into our customers' broader data platform approaches.

TiDB is commonly used in environments where an elastically scalable, reliable, and secure transactional database is needed. With TiDB Cloud offering a generous free tier, it also felt like a practical platform for this kind of exploration.

Why Start at the Database?

I think it is fair to say that the GraphQL way encourages a client-first approach. You start with the needs of the client, design a schema to support those needs, and then implement resolvers that fetch data from databases or services. This approach can work well in many situations and is well proven in practice.

I was interested in exploring a different approach. From my perspective, a well-designed relational model already encodes relationships, constraints, naming, and access boundaries. Those decisions are made thoughtfully, and reflect a deep understanding of the domain.

This project explores my thoughts on how an existing database structure can serve as a starting point for delivering a GraphQL API. Rather than treating the database as an implementation detail, the project uses an existing TiDB schema as the foundation and asks how much of that intent can be preserved as the data is exposed through GraphQL.

What This Project Is, and What It Is Not

This is an experiment. It is not a full-featured GraphQL platform, and it is not intended to be production-ready. The project exists primarily as a way for me to explore different data modelling ideas and learn from the tradeoffs involved.

The current implementation focuses on a small set of concerns:

  • GraphQL schema generation via database introspection
  • Sensible transformation defaults, with an emphasis on convention over configuration
  • Minimal configuration and predictable results

The project assumes that the underlying database schema has been designed with care. It does not attempt to compensate for poor modeling choices, and it does not try to cover every possible GraphQL use case.

Instead, the project provides a way to explore how a database-first approach feels in practice, what trade-offs look like, and where it works well or starts to show limitations.

If that sounds interesting, you can find the TiDB-GraphQL project on GitHub, and sign-up for your own TiDB Cloud service.

Scheduled Tasks – YARA Post #8

Today I’ll share just a short snippet that I used to look for some specific scheduled tasks on a Windows system. Luckily, Windows creates XML files located somewhere like the C:\Windows\System32\Tasks folder. These files contain an XML representation of the scheduled tasks, and it is these files that I am scanning with YARA.

Here’s a quick example of the rule:

// Detects the scheduled task on a Windows machine that runs Ransim
rule RansimTaskDetect : RansomwareScheduledTask {
    meta:
        author = "Ben Meadowcroft (@BenMeadowcroft)"
    strings:
        // Microsoft XML Task files are UTF-16 encoded so using wide strings
        $ransim = "ransim.ps1 -mode encrypt</Arguments>" ascii wide nocase
        $task1  = "<Task" ascii wide
        $task2  = "xmlns=\"http://schemas.microsoft.com/windows/2004/02/mit/task\">" ascii wide

    condition:
        all of them
}

The scheduled task runs a short PowerShell script that simulates some basic ransomware behavior, and this rule just matches the XML file for that task. The file is encoded in UTF-16, so the $task1 and $task2 strings reference common content found within the XML file (the start of the <Task element, and the XML namespace used to define the schema); the ascii wide modifiers search for each string in both ASCII and wide (double-byte) forms. The remaining string looks for the invocation of the script as an argument to the task, ignoring case.
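To see exactly what the wide modifier matches here, a quick Python check of the UTF-16 encoding (this is just an illustration of the byte layout, nothing YARA-specific):

```python
# Windows task XML files are UTF-16 (little-endian) encoded, so each
# ASCII character is followed by a NUL byte in the file's raw bytes.
text = "<Task"

wide_bytes = text.encode("utf-16-le")   # what a `wide` string matches
ascii_bytes = text.encode("ascii")      # what a plain (ascii) string matches

print(wide_bytes)   # b'<\x00T\x00a\x00s\x00k\x00'
print(ascii_bytes)  # b'<Task'
```

Declaring the strings with both ascii and wide keeps the rule working even if a task file turns up in a single-byte encoding.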

If I were looking for the presence of a task on live systems, I of course have other tools I could use, such as schtasks /query. However, as I am often operating on the backups of a system, this file-based approach can be very helpful: it doesn’t rely on the availability of the primary system when I want to identify whether a scheduled task was present at some historical point in time.

Analyzing ZIP (OOXML) Files with YARA (part 3) – YARA Post #7

My prior posts about examining ZIP archives have covered matching the file names within a ZIP archive, as well as matching the pre-compression CRC values of the files within the archive. In this post I am going to reference an interesting example of parsing the OOXML format used by modern Microsoft Office products. This Office format is essentially a ZIP archive that contains certain files within it (describing the Office document).

Aaron Stephens at Mandiant wrote a blog post called “Detecting Embedded Content in OOXML Documents”. In that post Aaron shared a few different techniques used to detect and cluster Microsoft Office documents. One of these examples was detecting a specific PNG file embedded within documents; the image was used to guide the user towards enabling macros. The presence of the image in this phishing doc could be used to cluster these attacks.

Given the image file’s CRC, its size, and the fact that it was a PNG file, the author was able to create a YARA rule that would match if this image file was located within the OOXML document (essentially a ZIP archive). This rule approaches the ZIP file a little differently than my prior couple of posts. The author skips looking for the ZIP local file header and references the CRC ($crc) and uncompressed file size ($ufs) hex strings directly to narrow down the match. They also check whether the file name field ends with the ".png" extension.

rule png_397ba1d0601558dfe34cd5aafaedd18e {
    meta:
        author = "Aaron Stephens <[email protected]>"
        description = "PNG in OOXML document."

    strings:
        $crc = {f8158b40}
        $ext = ".png"
        $ufs = {b42c0000}

    condition:
        $ufs at @crc[1] + 8 and $ext at @crc[1] + uint16(@crc[1] + 12) + 16 - 4
}

In this example the condition is using the @crc[1] as the base from which the offsets are calculated, unlike our prior examples where the offsets were based from the start of the local file header. The use of the at operator tests for the presence of the other strings at a specific offset (to the CRC value in this case).

An alternative approach to consider is using the wildcard character ? in the hex string. This allows us to match on the CRC and uncompressed file size fields together while skipping over the 4 bytes used to store the compressed file size field, and then validate that the four-letter .png extension is at the end of the file name field.

rule png_alt {
    strings:
        $crc_ufs = {f8158b40 ???????? b42c0000}
        $ext = ".png"

    condition:
        $ext at @crc_ufs[1] + uint16(@crc_ufs[1] + 12) + 16 - 4
}
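To convince myself the offset arithmetic in these conditions is right, here is a quick Python sanity check against a synthetic local file header. The field values are made up purely to mirror the rule's hex strings; only the offsets come from the ZIP specification (CRC-32 at 14, uncompressed size at 22, name length at 26, file name at 30).

```python
import struct

# Build a synthetic ZIP local file header whose CRC and uncompressed
# size match the rule's {f8158b40} and {b42c0000} hex strings.
name = b"word/media/image1.png"
header = struct.pack(
    "<4sHHHHHIIIHH",
    b"PK\x03\x04",   # local file header signature
    20, 0, 8,        # version needed, flags, compression method
    0, 0,            # mod time / mod date
    0x408b15f8,      # CRC-32 -> bytes f8 15 8b 40 little-endian
    1234,            # compressed size (the 4 bytes the ? wildcards skip)
    0x2cb4,          # uncompressed size -> bytes b4 2c 00 00
    len(name), 0,    # file name length, extra field length
) + name

crc_at = 14                                  # where @crc would match
ufs = header[crc_at + 8 : crc_at + 12]       # $ufs at @crc[1] + 8
name_len = struct.unpack_from("<H", header, crc_at + 12)[0]
ext_at = crc_at + name_len + 16 - 4          # @crc[1] + uint16(@crc[1]+12) + 16 - 4
print(ufs.hex(), header[ext_at : ext_at + 4])
```

Walking the arithmetic: the CRC sits at offset 14, so @crc + 8 lands on the uncompressed size at offset 22, and @crc + 16 lands on the file name at offset 30; subtracting 4 from the end of the name isolates the extension.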

Analyzing ZIP Files with YARA (part 2) – YARA Post #6

In my first exploration of analyzing ZIP files with YARA I covered how to create a YARA rule that matches specific file names within a ZIP archive. In this post we’ll cover a few other fields that may be of interest.

One interesting example is looking for encrypted ZIP archives. Here’s the Twitter post from Tyler McLellan that showed me how to do this with YARA:

Check if ZIP Archive is Encrypted with YARA – from @tylabs

This snippet first checks whether the file starts with a local file header record, uint16be(0) == 0x504B. It then tests whether the first bit in the “general purpose bit flag” is set by performing a bitwise AND against the flag value and checking the result: uint16(6) & 0x1 == 0x1. This first bit indicates whether the file is encrypted:

4.4.4 general purpose bit flag: (2 bytes)

Bit 0: If set, indicates that the file is encrypted.

section 4.4.4 of the .ZIP File Format Specification
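The same two checks can be replicated in Python against synthetic header prefixes, which is a handy way to sanity-check the offsets before writing a YARA condition (the headers below are hand-built for illustration, not real archives):

```python
import struct

def looks_encrypted(data: bytes) -> bool:
    # uint16be(0) == 0x504B : file starts with the "PK" magic
    # uint16(6) & 0x1 == 0x1 : bit 0 of the general purpose bit flag
    return data[:2] == b"PK" and struct.unpack_from("<H", data, 6)[0] & 0x1 == 0x1

# Minimal local file header prefixes: signature (4 bytes),
# version needed (2 bytes), then the general purpose bit flag (2 bytes).
plain  = b"PK\x03\x04" + struct.pack("<HH", 20, 0x0000)  # flag bit 0 clear
locked = b"PK\x03\x04" + struct.pack("<HH", 20, 0x0001)  # flag bit 0 set

print(looks_encrypted(plain), looks_encrypted(locked))  # False True
```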

Another interesting field in the ZIP archive’s local file header is the CRC-32 of the file. This is stored at offset 14 from the start of the header and is 4 bytes long. If you are looking for specific files then this can help narrow them down. It should be noted that the CRC is not a cryptographic calculation, but as it is stored in the header it is very simple to check for.

rule matchFileCRC {
   strings:
        $zip_header = {50 4B 03 04}
        $crc1 = {D8 39 6B 82}
        $crc2 = {B1 81 05 76}
        $crc3 = {C2 4A 02 94}
   condition:
        // iterate over local file zip headers
        for any i in (1..#zip_header):
        (
            // match any of the CRC32 values
            for any of ($crc*):
            (
                $ at @zip_header[i]+14
            )
        )
}

The line for any of ($crc*) checks all of the defined CRC values, and the line $ at @zip_header[i]+14 causes the rule to match if any of the CRCs we are looking for appears at offset 14 from a local file header.

Learning from Others – YARA Post #5

Taking a quick break from my Zip Archive adventures, one thing I’d be remiss not to mention is the community sharing that happens around YARA. As well as the specific YARA rules that people share, there are also a lot of insights into how to use YARA, how to craft or generate rules, and lots of other creative uses of the tool.

One example of this is the activity around #100DaysofYARA on Twitter last year that was kicked off by Greg Lesnewich. Looking through many of the tweets mentioning this hashtag will certainly show some interesting possibilities in using YARA. I’d recommend following that hashtag on Twitter and Mastodon, seeing what comes up on January 1st 2023, and sharing your own experiments!

Reading ZIP (JAR) Files with YARA (part 1) – YARA Post #4

ZIP files have a well-defined structure, which makes it possible to use YARA to match certain characteristics of files stored within a ZIP archive. For example, file name information is stored in both the local file headers and the central directory file headers within the archive. Wikipedia has a decent write-up on the ZIP file format structure.

I first started experimenting with this to examine the contents of Java archives (JAR files, which are essentially ZIP archives) for specific files (in this case bundled log4j jars). To do this I first defined the marker for the local file header, $zip_header = {50 4B 03 04}, in the strings section, followed by the files I was interested in locating in the ZIP. See the rule below:

rule vuln_log4j_jar_name_injar : log4j_vulnerable {
    strings:
        $zip_header = {50 4B 03 04}
        $a00 = "log4j-core-2.0-alpha1.jar"
        $a01 = "log4j-core-2.0-alpha2.jar"
        // …
        $a41 = "log4j-core-2.14.0.jar"
        $a42 = "log4j-core-2.14.1.jar"
            
    condition:
        // iterate over local file zip headers
        for any i in (1..#zip_header):
        (
            // match any of the file names
            for any of ($a*):
            (
                $ in (@zip_header[i]+30..@zip_header[i]+30+uint16(@zip_header[i]+26))
            )
        )
}

In the condition section we introduced some new capabilities. We iterate over the string matches for the $zip_header string. The variable #zip_header (note the #) gives us the count of the matches, for any i in (1..#zip_header):(…) iterates over the matches (populating i for each match), while the @zip_header[i] syntax (note the @) lets us reference the offset in the file for each match.

From the ZIP format specification we know that the file name starts at offset 30 from the start of the local file header, and that its length is stored in the two bytes at offset 26. Given this information we can read the length of the file name using uint16(@zip_header[i]+26), read the file name beginning at offset 30 for that many bytes, and compare it to the file names we are looking for, referenced by $a*.
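The same offset logic can be sanity-checked in Python against a synthetic local file header (the header below is hand-built for illustration, not a real JAR):

```python
import struct

def local_header_name(data: bytes, off: int) -> bytes:
    """Extract the file name from the local file header at `off`."""
    assert data[off : off + 4] == b"PK\x03\x04"
    # uint16(@zip_header[i]+26) : two-byte file name length field
    name_len = struct.unpack_from("<H", data, off + 26)[0]
    # name occupies (@zip_header[i]+30 .. @zip_header[i]+30+name_len)
    return data[off + 30 : off + 30 + name_len]

# Minimal synthetic header: signature, 22 zero bytes for the fixed
# fields up to offset 26, then name length, extra length, and the name.
name = b"log4j-core-2.14.1.jar"
header = b"PK\x03\x04" + bytes(22) + struct.pack("<HH", len(name), 0) + name

print(local_header_name(header, 0))  # b'log4j-core-2.14.1.jar'
```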

In part 2 I’ll dig into some other interesting things we can look for in the zip headers.