At Agentnoon we build org chart software for large enterprises. Our customers range from 5,000 to 240,000 employees. This is what I learned scaling the frontend over ~2 years.

Where we started

Mid-2022, I integrated d3-org-chart as the foundation. It handled layout, expand/collapse, and zooming. We mounted small Vue instances as custom elements inside each node: a necessary workaround, since d3-org-chart only accepted string templates for node content, and tiny custom HTML elements were the only way to get Vue components into them.

This worked fine for a few thousand employees. At 20k it got sluggish. At 240k the browser just died.

The problem

For every org chart node we were:
– creating a custom element via document.createElement
– mounting a fresh Vue instance inside it
– setting up reactive dependency tracking
– rendering the component in its own scope, disconnected from the parent app

With 10k visible nodes:
– 1 min 45 sec to expand the full chart
– 5.5 GB memory
– browser frozen

The d3-org-chart library was 2,000+ lines of code we’d heavily patched. Drag & drop libraries couldn’t understand the shadow DOM scope. Every new feature was duct tape on top of duct tape.

Rewrite: 2,000 lines to 200

I was implementing drag & drop for position planning and hit a wall — libraries couldn’t work within d3-org-chart’s shadow DOM. I had three options: ignore it, write a custom implementation, or rewrite the chart as a native Vue component.

I went with option 3. Had a hunch it would be quick for this case — no expand/collapse needed, I knew how to extract d3’s tree layout calculations, and we only needed root + one level of children.

It took 4-5 hours. The result was ~200 lines. A template iterating over data, setting transform attributes on SVG nodes and lines. No custom elements, no shadow DOM, no separate Vue instances.

The key: let d3 calculate the layout, let Vue render the DOM. The old approach used d3 for both, then hacked Vue on top.

// d3 calculates positions
const layout = d3.tree().size([width, height])
const root = layout(d3.hierarchy(data))

// Vue renders
// <g v-for="node in root.descendants()" :key="node.data.id"
//    :transform="`translate(${node.x},${node.y})`">
//   <OrgChartCard :person="node.data" />
// </g>
| Metric | d3-org-chart | Native Vue |
| --- | --- | --- |
| Time | 1 min 45 sec | < 5 seconds |
| Memory | 5.5 GB | 1.1 GB |
| Code | 2,000+ lines | ~200 lines |

Lazy loading custom fields

Next bottleneck: custom fields. User-defined attributes stored in PostgreSQL. For a 30k org with 20 types, that’s 600k rows. We loaded all of them on startup.

What I found:
– most features didn’t need ALL values
– filters only needed unique values per type — one SQL query, sub-second even for 600k rows
– org chart cards only needed values for visible nodes

So I built two backend endpoints (unique values + filtered employees) and used TanStack Query to fetch values per person as they enter the viewport. Cache policy handles the rest.
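The viewport-driven fetching can be sketched without the library: collect the ids of cards as they scroll into view, then fetch their field values in one batched request per tick. This is a minimal, dependency-free illustration of the idea; in the app TanStack Query and an intersection observer do this work, and all names here are illustrative.

```javascript
// Collects ids of cards entering the viewport and resolves them with
// one batched request, caching results so repeat scrolls are free.
function createViewportBatcher(fetchPeople, scheduleFlush = fn => setTimeout(fn, 0)) {
  const pending = new Set()
  const cache = new Map()
  let scheduled = false

  async function flush() {
    scheduled = false
    const ids = [...pending].filter(id => !cache.has(id))
    pending.clear()
    if (ids.length === 0) return
    const records = await fetchPeople(ids) // one request for the whole batch
    for (const rec of records) cache.set(rec.id, rec)
  }

  return {
    // Called by the intersection observer for each card entering the viewport.
    request(id) {
      pending.add(id)
      if (!scheduled) {
        scheduled = true
        scheduleFlush(flush)
      }
    },
    flush,
    cache,
  }
}
```

With TanStack Query the cache and background refresh come for free; the batcher only decides *what* to ask for.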

| Metric | Before | After |
| --- | --- | --- |
| Directory memory (30k org) | 400-500 MB | 200-300 MB |
| Full app path | 1,400 MB | 900 MB |

Don’t load all people

The big one. We were fetching the entire people collection on app load. For 240k employees that meant decompressing and parsing everything before anything could render.

I moved JSON decompression to a Web Worker first (kept the UI responsive but didn’t reduce memory). Then rethought the whole approach:

The frontend doesn’t need all people data. It needs the tree structure and data for visible nodes.

I used d3 hierarchy functions on the backend to build the full tree from {id, parentId} pairs. For 240k employees this takes ~1 second server-side. The backend returns:
– lightweight hierarchy pairs for the entire org
– full records only for requested nodes (batched by what’s on screen)
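Building the tree from flat {id, parentId} pairs is a single pass. In production this is d3's hierarchy helpers (d3.stratify) running in Node.js; below is a dependency-free sketch of the same operation, with illustrative names.

```javascript
// Build a nested tree from flat { id, parentId } pairs, as d3.stratify
// does. One Map lookup per pair, so ~240k pairs stays around a second.
function buildTree(pairs) {
  const byId = new Map(pairs.map(p => [p.id, { id: p.id, children: [] }]))
  let root = null
  for (const { id, parentId } of pairs) {
    const node = byId.get(id)
    if (parentId == null) root = node // the CEO / top of the org
    else byId.get(parentId).children.push(node)
  }
  return root
}
```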

Frontend renders the tree immediately from hierarchy, shows placeholder cards. As nodes enter the viewport, it batches requests for their data. TanStack Query handles caching and background refresh.

This meant rewriting basically every feature that previously iterated over people. The changelog for this release was the longest I’d ever written.

| Metric | Before | After |
| --- | --- | --- |
| Memory on load (240k) | 1.8 GB | 0.8 GB |
| Filter apply | 9 sec | 1.5 sec |
| Filter reset | 15 sec | 2.5 sec |
| 30k org total memory | 1+ GB | < 400 MB |

Scenarios migration

The main org chart was fast. But scenario planning still loaded all people upfront.

Migrating scenarios was the hardest part. Every operation had assumptions about in-memory data. Move employee — previously instant (mutate local state), now needs API call + cache invalidation. Remove employee — 4-5 different code paths depending on whether they’re a backfill, have subordinates, etc.

Trickiest bug: a cache race condition. Moving an employee invalidated the people cache, the hierarchy endpoint refetched immediately, hit the half-rebuilt cache, and returned a single employee.

I used TanStack Query cache manipulation for optimistic updates on simple operations, loading states for complex ones.
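The shape of the fix for the race is the standard optimistic-update pattern: snapshot the cache before mutating, apply the change locally, and roll back if the API call fails. TanStack Query exposes this via onMutate / cancelQueries; the sketch below shows the same logic against a plain Map, with illustrative names.

```javascript
// Optimistic move with rollback: mutate the local cache immediately,
// restore the snapshot if the server rejects the move.
function createOptimisticMover(cache) {
  return async function moveEmployee(id, newManagerId, apiCall) {
    const snapshot = new Map(cache) // rollback point
    cache.set(id, { ...cache.get(id), managerId: newManagerId }) // optimistic
    try {
      await apiCall(id, newManagerId)
    } catch (err) {
      cache.clear() // server rejected: restore the pre-move state
      for (const [k, v] of snapshot) cache.set(k, v)
      throw err
    }
  }
}
```

The part that killed the race in practice was cancelling the in-flight hierarchy refetch before mutating, so a half-rebuilt response could never overwrite the optimistic state.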

What I took away

  • Each phase was the minimum change for the next order of magnitude. Dictionary lookups for 10k. New rendering for 30k. Backend hierarchy for 240k. If I’d built the final architecture in 2022 I would have over-engineered for problems that didn’t exist yet.
  • The bottleneck was never what I expected. I assumed DOM rendering. It was actually: unnecessary Vue instance creation, array iteration patterns (.includes() inside loops), synchronous JSON decompression, loading data we didn’t need.
  • Backend compute is cheap, memory is not. Hierarchy building + filtering in Node.js: 1-2 seconds for 240k records. Saved gigabytes of browser memory.
  • Data-driven scope cutting. I skipped reimplementing directory search — usage data showed 3 uses in a month. Same for column sorting.
  • Performance improvements compound. Vue rewrite enabled viewport rendering. Viewport rendering enabled on-demand fetching. On-demand fetching enabled TanStack Query caching. By the end, 30k orgs felt like cheating.
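The .includes()-inside-a-loop pattern mentioned above is O(n·m); replacing it with a Set built once makes each lookup O(1), which is the "dictionary lookups for 10k" fix in a nutshell. A minimal example (function names are illustrative):

```javascript
// Filter people down to the visible set. Building the Set once is O(m);
// each .has() is O(1), versus O(m) for visibleIds.includes(p.id) per person.
function filterVisible(people, visibleIds) {
  const visible = new Set(visibleIds)
  return people.filter(p => visible.has(p.id))
}
```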

Summary

| What changed | Before | After |
| --- | --- | --- |
| Org chart library | 2,000+ lines | ~200 lines |
| 10k org render | 1:45 / 5.5 GB | 5 sec / 1.1 GB |
| 240k org load | browser crash | 0.8 GB, usable |
| Filter operations (240k) | 9 sec | 1.5 sec |
| Directory memory (30k) | 500 MB | 200 MB |

At the end of 2023, opening a 30k org in dev would crash the browser in 5 seconds. Six months later I could keep it open all day.
