Journal of Questionable Software Engineering • Vol. 1, Issue 1 • November 2025

Flyx: An Empirical Study in Stealing from Thieves While Maintaining Moral Superiority

A comprehensive analysis of building ethical streaming infrastructure by reverse engineering the security measures of criminals who profit from content they do not own, featuring extensive documentation of late-night debugging sessions and an alarming amount of coffee consumption.

Vynx, Independent Researcher & Professional Insomniac

Received: June 2025 | Revised: October 2025 | Accepted: November 2025 | Reading Time: ~20 minutes

Abstract

This paper presents Flyx, a fully-functional video streaming platform developed over five months to test a hypothesis that many would consider obvious but few have bothered to prove: that free streaming services do not actually require malicious advertising, invasive tracking, cryptocurrency miners, or user interfaces designed by someone who genuinely hates humanity. The pirate streaming ecosystem has long operated under the assumption that exploitation is the price of free content. We challenge this assumption by building something that works without being terrible.

Through systematic reverse engineering of third-party streaming providers—entities that themselves profit from content they do not own—we demonstrate that it is entirely possible to extract and serve video content without subjecting users to the digital equivalent of walking through a minefield blindfolded. Our findings suggest that the prevalence of exploitative practices in pirate streaming reflects a choice to prioritize profit over basic human decency, not a technical or economic necessity.

Keywords: Streaming Architecture, Reverse Engineering, Ethical Design, Obfuscation Analysis, JavaScript Archaeology, Sleep Deprivation, Coffee Dependency

1. Introduction

1.1 The State of Free Streaming (A Horror Story)

The year is 2025. Humanity has achieved remarkable technological feats. We have sent robots to Mars. We have developed artificial intelligence that can write poetry and generate images of cats wearing business suits. And yet, if you want to watch a movie for free on the internet, you must first navigate an obstacle course of pop-up advertisements, fake download buttons, cryptocurrency miners, and user interfaces that appear to have been designed by a committee of people who have never actually used a computer.

This is not hyperbole. This is Tuesday.

The pirate streaming ecosystem represents one of the most hostile environments on the modern web. Users seeking free access to movies and television are routinely subjected to an arsenal of exploitative practices that would make a used car salesman blush. Pop-up advertisements spawn endlessly, like some sort of digital hydra. Fake "close" buttons trigger additional advertisements, because apparently the first seventeen were not enough. Cryptocurrency miners run silently in the background, turning your laptop into a space heater while generating approximately $0.003 worth of Monero for someone in a country you cannot pronounce.

Browser fingerprinting tracks you across the web with the persistence of an ex who "just wants to talk." Malware distribution disguises itself as video players, codec updates, and occasionally as messages from Nigerian princes who have inexplicably developed an interest in streaming technology. Dark patterns trick users into clicking things they did not intend to click, visiting places they did not intend to visit, and questioning life choices they thought they had already resolved in therapy.

1.2 The Implicit Assumption

Underlying this entire ecosystem is an assumption so pervasive that most people have stopped questioning it: free content requires exploitation. If you are not paying with money, you must pay with your security, your privacy, your CPU cycles, and your sanity. This is presented as an immutable law of the universe, like gravity or the tendency of software projects to exceed their estimated completion dates by a factor of three.

We reject this assumption.

Not because we are naive idealists who believe in the inherent goodness of humanity—we have spent too much time reading YouTube comments for that—but because we suspected it was simply not true. The exploitation is not a necessary evil. It is a choice. A profitable choice, certainly, but a choice nonetheless.

1.3 The Hypothesis

This project began with a simple question: what if someone built a streaming platform that was not actively hostile to its users? What if, instead of treating visitors as resources to be extracted, we treated them as people who just wanted to watch a movie without their browser catching fire?

"The best way to prove something is possible is to do it. The second best way is to write a really long document about doing it and hope people believe you."— Ancient Proverb (Source: We Made It Up)

We chose the first option, then wrote the document anyway because we are overachievers with poor time management skills.

1.4 Scope and Contributions

This paper makes the following contributions to the field of "things that should have been obvious but apparently needed proving":

  • Proof of Concept: A fully functional streaming platform that operates without advertisements, tracking, malware, or contempt for its users.
  • Reverse Engineering Documentation: Comprehensive analysis of the obfuscation and security measures employed by pirate streaming providers, including detailed accounts of the author's descent into madness while debugging minified JavaScript at 3 AM.
  • Architectural Reference: A blueprint for building privacy-respecting streaming applications that other developers can use, ignore, or print out and use as kindling, depending on their preferences.
  • Economic Analysis: Evidence that the "we need aggressive monetization to survive" argument is, to use the technical term, complete nonsense.

2. Literature Review

2.1 The Exploitation Economy

Academic research into pirate streaming sites has documented what users have known for years: these platforms are terrible. Rafique et al. (2016) found that over 50% of visitors to major pirate streaming sites were served malware through advertisements. This is not a bug; it is the business model. The advertising networks that work with these sites have content policies best described as "whatever pays."

Konoth et al. (2018) documented the rise of in-browser cryptocurrency mining, a practice that combines the excitement of watching your CPU usage spike to 100% with the financial reward of generating approximately nothing for yourself while making someone else slightly less poor. The authors noted that users often had no idea this was happening, which is either a testament to the subtlety of the implementation or the general state of computer literacy in the modern era.

Laperdrix et al. (2020) provided a comprehensive survey of browser fingerprinting techniques, demonstrating that even users who clear their cookies and use private browsing can be tracked with alarming accuracy. The paper reads like a horror novel for anyone who thought "incognito mode" actually meant something.

2.2 The Dark Patterns Epidemic

Gray et al. (2018) coined the term "dark patterns" to describe user interface designs that trick users into doing things they did not intend. Pirate streaming sites have elevated this to an art form. Fake close buttons, hidden redirects, misleading download links, and countdown timers that reset when you are not looking—these are not accidents. They are features.

Mathur et al. (2019) conducted a large-scale analysis of dark patterns across 11,000 shopping websites and found them everywhere. We did not conduct a similar analysis of pirate streaming sites because we value our mental health, but anecdotal evidence suggests the situation is significantly worse. At least shopping sites occasionally want you to buy something. Pirate streaming sites just want to watch the world burn.

2.3 The "Necessary Evil" Myth

Defenders of exploitative practices often argue that they are economically necessary. "Servers cost money," they say, as if this explains why clicking a play button should open seventeen browser tabs and install a toolbar nobody asked for.

This argument deserves scrutiny, primarily because it is wrong.

Pirate streaming sites do not host content. They aggregate it. They are glorified link directories with embedded players that point to streams hosted elsewhere. The actual bandwidth costs are borne by third-party providers. The site operators need to pay for domain registration, basic hosting, and perhaps a modest amount of server-side processing. Modern serverless platforms offer free tiers that can handle substantial traffic without cost.

The exploitation is not necessary. It is simply more profitable than the alternative. Site operators choose to deploy malware, mine cryptocurrency, and track users because these practices generate revenue, not because the sites could not function without them.

2.4 Privacy-Respecting Alternatives in Other Domains

The broader web has seen growing interest in privacy-respecting alternatives to surveillance-based services. DuckDuckGo has demonstrated that search can work without tracking. Signal has proven that messaging can be secure without being unusable. ProtonMail has shown that email can be private without requiring a PhD in cryptography to set up.

Yet the streaming space has seen limited progress in this direction. This is partly due to technical complexity—streaming is harder than search—and partly due to legal ambiguity surrounding content aggregation. But mostly, we suspect, it is because the people capable of building something better were busy with legitimate projects, while the people running pirate sites were too busy counting their malware revenue to care about user experience.

3. Methodology

3.1 Research Design

This study employs what academics call "constructive research methodology" and what normal people call "building the thing and seeing if it works." The primary research artifact—the Flyx streaming platform—serves as both the subject of investigation and the vehicle for generating insights. It also serves as evidence that the author has too much free time and questionable priorities.

The research proceeded through four distinct phases, each characterized by its own unique blend of optimism, despair, and caffeine dependency:

Phase 1: Requirements Analysis (Weeks 1-3). Feature prioritization, technology evaluation, and the gradual realization that this project was going to be significantly more complicated than initially anticipated.

Phase 2: Core Development (Weeks 4-16). Building the platform, reverse engineering stream providers, and developing an intimate familiarity with the JavaScript debugger that borders on unhealthy.

Phase 3: Deployment & Optimization (Weeks 17-19). Going live, discovering that everything works differently in production, and fixing bugs that somehow did not exist five minutes ago.

Phase 4: Documentation (Weeks 20-22). Writing this paper, which took longer than expected because academic writing is hard and we kept getting distracted by the platform we built.

3.2 Development Constraints

To ensure the validity of our findings regarding solo development feasibility—and also because we did not have a choice—the following constraints were observed throughout the project:

  • 👤 Single Developer: All code, design, and documentation produced by one individual. No contractors, collaborators, or rubber ducks that provided unusually good advice.
  • 💸 Zero Budget: Only free tiers of services utilized. If a service wanted a credit card, we found an alternative or learned to live without it.
  • 🌙 Part-Time Effort: Development conducted during evenings and weekends, averaging 15-20 hours per week over five months. Sleep was occasionally sacrificed.
  • 📚 Public Resources Only: All learning materials publicly available. No proprietary training, insider knowledge, or deals with supernatural entities.

3.3 Ethical Considerations

Before proceeding, we established a set of non-negotiable ethical principles. The platform would have:

  • Zero advertisements of any kind, not even "tasteful" ones
  • Zero tracking cookies or cross-site identifiers
  • Zero cryptocurrency mining, even the "opt-in" kind that nobody opts into
  • Zero pop-ups, pop-unders, or pop-sideways
  • Zero fake buttons, misleading links, or dark patterns
  • Zero collection of personally identifiable information
  • Zero selling of user data to third parties, fourth parties, or any other parties

If we could not build the platform without violating these principles, we would not build it at all. Fortunately, as this paper demonstrates, we could.

4. System Architecture

4.1 Architectural Philosophy

The Flyx architecture is guided by a simple principle: minimize complexity, maximize reliability, and never, under any circumstances, require the developer to wake up at 3 AM because a server crashed. This led us to embrace serverless computing with the enthusiasm of someone who has been personally victimized by server maintenance.

The system follows what we call the "Not My Problem" architectural pattern, wherein as many operational concerns as possible are delegated to managed services that are someone else's problem. Scaling? Vercel's problem. Database availability? Neon's problem. SSL certificates? Also someone else's problem. Our problem is writing code that works, which is frankly enough problems for one person.

4.2 Technology Stack

Each technology in the stack was selected through rigorous evaluation against our primary criteria: "Will this make my life easier or harder?" Technologies that made life harder were rejected, regardless of how impressive they looked on a resume.

Next.js 14

The backbone of the application. Server-side rendering for SEO, API routes for the proxy layer, and a developer experience that does not make us want to throw our laptop out the window. The App Router took some getting used to, but we got there eventually.

TypeScript

Type safety was non-negotiable for a project of this complexity. TypeScript caught countless bugs at compile time that would have otherwise manifested as mysterious production errors at the worst possible moment.

Vercel

Hosting, edge functions, and a global CDN, all on a free tier generous enough to handle our traffic without requiring us to sell organs. The deployment experience is so smooth it feels like cheating.

Neon PostgreSQL

Serverless PostgreSQL that scales to zero when not in use, which is perfect for a project with unpredictable traffic patterns and a budget of exactly zero dollars.

HLS.js

The industry standard for adaptive bitrate streaming in browsers. Handles manifest parsing, quality switching, and buffer management so we could focus on not making the user interface terrible.

4.3 The Proxy Layer

The proxy layer is where the magic happens, and by "magic" we mean "a significant amount of header manipulation and referrer spoofing that makes streams actually play."

Stream providers implement various restrictions to prevent their content from being embedded on unauthorized domains. They check Referer headers, Origin headers, and occasionally perform rituals that we do not fully understand but have learned to accommodate. The proxy layer intercepts all stream requests and rewrites headers to match what the providers expect, presenting a unified interface to the client while handling the complexity behind the scenes.
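The header-rewriting idea can be sketched in a few lines. Everything below is illustrative: the hostnames, header values, and `buildUpstreamHeaders` helper are placeholders for whatever a given provider actually checks, not the rules Flyx ships.

```typescript
// Sketch of the proxy's header rewrite. Hostnames and values are placeholders.
type ProviderRule = { referer: string; origin: string };

const RULES: Record<string, ProviderRule> = {
  "cdn.example-provider.net": {
    referer: "https://embed.example-provider.net/",
    origin: "https://embed.example-provider.net",
  },
};

// Build headers for the upstream fetch so the stream server sees the embed
// page it expects rather than our domain.
function buildUpstreamHeaders(targetUrl: string): Record<string, string> {
  const rule = RULES[new URL(targetUrl).hostname];
  const headers: Record<string, string> = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", // look like a browser
  };
  if (rule) {
    headers["Referer"] = rule.referer;
    headers["Origin"] = rule.origin;
  }
  return headers;
}
```

A server-side API route would then `fetch(target, { headers: buildUpstreamHeaders(target) })` and pipe the body back to the player; browsers forbid setting Referer from client code, which is one reason this has to live on the server.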

5. The Heist: Reverse Engineering Stream Providers

5.1 The Irony

Here is the delicious irony at the heart of this project: the streaming providers we needed to crack are not legitimate businesses. They are pirates themselves—profiting from content they do not own by wrapping it in malware, pop-ups, and cryptocurrency miners. Our job was to break into their systems and steal what they had already stolen, then serve it without the exploitation.

We are, in essence, robbing the robbers. And we feel absolutely no guilt about it.

"These sites make millions from advertisements and malware while hiding behind layers of obfuscation that would make nation-state hackers proud. They are not protecting intellectual property—they are protecting their revenue stream from people like us who want to give users the content without the cancer."— Field Notes, 3 AM, Week 7

5.2 The Battlefield

Picture this: you find a pirate streaming site. It works. Videos play. But when you try to extract the actual stream URL to use in your own player—to strip away the pop-ups and malware—you hit a wall. Not just one wall. A fortress of walls, each more devious than the last. These criminals have invested serious engineering talent into making sure nobody can do what we were trying to do.

🔐 Challenge 1: The Code Spaghetti Monster

Open DevTools on any pirate streaming site and look at their JavaScript. It is not code—it is a war crime against readability. Variable names like _0x4a3f and _0xb7c2. Strings split into arrays of character codes, reassembled through twelve layers of function calls that reference other arrays by computed indices. Control flow that looks like someone threw spaghetti at a wall and called it architecture.

And the crown jewel: eval() statements that generate MORE obfuscated code at runtime. You cannot even read what you are trying to crack because it does not exist until the page executes.

Our Approach: We built a custom deobfuscation pipeline. Intercept every eval() call and log what it produces. Trace string operations backwards through the call stack. Write AST-based transformers that rename variables based on usage patterns. Slowly, painfully, over many late nights, the gibberish becomes readable. Then you find the one line that matters: where they construct the stream URL.
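The interception trick at the heart of that pipeline can be sketched like this: swap the global eval for a wrapper that logs every layer of runtime-generated code before running it. The "packed" script below is a toy stand-in for a real packer's output, not actual provider code.

```typescript
// Sketch: capture runtime-generated code by wrapping the global eval.
const captured: string[] = [];
const realEval = globalThis.eval;

(globalThis as any).eval = (code: string) => {
  captured.push(String(code)); // keep each generated layer for offline study
  return realEval(code);       // then let the page behave normally
};

// Packers routinely emit eval() calls that build more code at runtime;
// the wrapper sees both the outer layer and the layer it generates.
const packed = 'eval("var streamUrl = \'https://example.invalid/master.m3u8\';")';
(globalThis as any).eval(packed);
```

Running this against a real page, each entry of `captured` is one layer of the onion; the last layers are usually readable enough to spot where the stream URL is assembled.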

⏱️ Challenge 2: The Ticking Clock

Found the stream URL? Congratulations. It expires in 90 seconds.

Every request to the stream server needs a fresh token computed from the current timestamp, the content ID, and a secret key buried somewhere in 50,000 lines of obfuscated JavaScript. Copy-paste the URL? Dead on arrival. You need to understand their entire authentication scheme and replicate it perfectly.

Our Approach: Hours of stepping through minified code in the debugger, watching variables change, mapping the flow of data from input to output. Eventually you find it: they are using HMAC-SHA256 with a hardcoded key hidden in what appears to be a fake jQuery plugin. Extract the key, reimplement the algorithm server-side, generate valid tokens on demand. Their 90-second window becomes irrelevant.
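A hedged sketch of what that server-side reimplementation looks like. The key, the message layout, and the content ID below are all illustrative stand-ins, not the provider's real values; only the shape of the scheme (HMAC-SHA256 over content ID and timestamp) comes from the account above.

```typescript
import { createHmac } from "node:crypto";

// Stand-in for the key dug out of the fake jQuery plugin; not a real key.
const RECOVERED_KEY = "not-the-real-key";

// Reimplement the token scheme server-side: HMAC-SHA256 over the content ID
// and current timestamp. The `id:timestamp` layout is an assumption here.
function signStreamToken(contentId: string, timestampSec: number): string {
  return createHmac("sha256", RECOVERED_KEY)
    .update(`${contentId}:${timestampSec}`)
    .digest("hex");
}

// Minting a fresh token on demand makes the 90-second expiry irrelevant:
const token = signStreamToken("tt0133093", Math.floor(Date.now() / 1000));
```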

🤖 Challenge 3: The Bot Hunters

These sites HATE automation. They check if you are running headless Chrome by looking for missing browser APIs. They analyze your mouse movements for human-like patterns. They fingerprint your WebGL renderer, your canvas, your audio context. They measure how long it takes you to click things and flag anything that seems too fast or too consistent.

Fail any check and you get a fake stream that plays for exactly 30 seconds before cutting to black—or worse, an IP ban that requires you to restart your router and contemplate your life choices.

Our Approach: We tried the obvious solutions first. Puppeteer with stealth plugins. Fake mouse movements with Bézier curves. Randomized timing delays. None of it worked consistently. Then we had a realization: their bot detection runs client-side. If we never execute their JavaScript, we never trigger their checks. Pure HTTP requests, carefully crafted headers, surgical extraction. No browser, no detection, no problem.

🪆 Challenge 4: The Russian Nesting Dolls

Click play on a pirate streaming site. The video loads in an iframe. That iframe loads another iframe from a different domain. Which loads ANOTHER iframe from yet another domain. The actual video player might be four layers deep, each layer hosted on a different domain with different CORS policies, each performing its own validation and token verification.

It is like trying to break into a bank vault that is inside another bank vault that is inside a third bank vault that is on fire.

Our Approach: Map the entire chain. Follow each redirect, extract each URL, understand what each layer validates. Build a system that traverses the whole maze automatically, spoofing referrers at each hop, collecting tokens from each layer, until you reach the actual stream buried at the bottom.
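The traversal can be sketched as a loop that fetches each layer, pulls the next iframe URL out of the HTML, and carries the previous layer along as the Referer. The iframe regex, depth limit, and helper names are simplifications of what a real extractor needs.

```typescript
// Pull the next embed layer's URL out of a page, resolving relative paths.
function nextEmbedUrl(html: string, baseUrl: string): string | null {
  const m = html.match(/<iframe[^>]+src="([^"]+)"/i);
  return m ? new URL(m[1], baseUrl).toString() : null;
}

// Walk the nesting-doll chain over plain HTTP, spoofing the previous layer
// as the referrer at every hop.
async function resolveEmbedChain(startUrl: string, maxDepth = 5): Promise<string> {
  let current = startUrl;
  let referer = "";
  for (let depth = 0; depth < maxDepth; depth++) {
    const res = await fetch(current, {
      headers: referer ? { Referer: referer } : {},
    });
    const next = nextEmbedUrl(await res.text(), current);
    if (!next) return current; // no deeper layer: this is the actual player page
    referer = current;         // each hop expects the previous layer as referrer
    current = next;
  }
  throw new Error("embed chain deeper than expected");
}
```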

5.3 War Stories

Every provider was a different puzzle. Different obfuscation, different tricks, different ways to make our lives difficult. Here are the ones that nearly broke us—and how we broke them instead.

The 2Embed Labyrinth — Three Weeks to Crack

2Embed was our white whale. A hydra of domains—streamsrcs, embedsrcs, vidsrc, and half a dozen others—each redirecting to the next, each generating new tokens, each running its own obfuscated validation. The final player used a packing algorithm we had never seen before: strings were not just encoded, they were shattered into individual characters stored in arrays, then reassembled through a maze of function calls that referenced other arrays by computed indices.

We spent two weeks just understanding how their packer worked. Filled notebooks with diagrams. Wrote a custom unpacker. Then discovered they had THREE different packing schemes that rotated based on content ID. Back to the drawing board.

The Breakthrough: 3 AM on a Tuesday, week three. We noticed the packing seed was derived from the TMDB ID in a predictable way. If we knew the content, we could predict which unpacker to use. Suddenly, extraction dropped from 5+ seconds with full browser automation to 180 milliseconds with pure HTTP requests. We may have woken up the neighbors with our celebration.

SuperEmbed's Decoy Trap — The One That Fought Back

SuperEmbed was paranoid. Canvas fingerprinting. WebGL checks. Timing analysis on every interaction. But their cruelest trick was the decoy streams. Fail their bot detection and they do not block you—they give you a stream that works perfectly for exactly 30 seconds, then dies. You think you have won. You deploy your code. Users start complaining. You realize you have been played.

We burned a week on stealth techniques. Puppeteer plugins. Fake mouse movements. Randomized timing. Nothing worked consistently. Their detection was too good.

The Breakthrough: We stopped trying to fool their JavaScript and started ignoring it entirely. Their validation happened client-side—in the browser. But the actual stream endpoint? It just needed the right parameters. We traced the network requests, found the endpoint, figured out what parameters it expected, and called it directly. No browser. No JavaScript. No bot detection. Just a clean HTTP request that returned the real stream every time.

5.4 The Numbers

  • 15+ obfuscation schemes cracked
  • 180 ms average extraction time
  • 95%+ first-try success rate
  • Coffee consumed

6. Implementation Details

6.1 The Streaming Pipeline

The streaming pipeline is the technical heart of the platform. When a user clicks play, a carefully orchestrated sequence of events unfolds:

  1. The system queries multiple stream providers in parallel, because relying on a single provider is a recipe for disappointment.
  2. Provider-specific decoders crack the obfuscation and extract playable URLs.
  3. The proxy layer handles CORS negotiation, header spoofing, and referrer manipulation.
  4. The clean stream is delivered to a custom video player that does not try to install malware or mine cryptocurrency.
  5. If the primary source fails, the system automatically falls back to alternatives without the user noticing anything except perhaps a brief loading indicator.
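Steps 1 and 5 above can be sketched as a first-success race over provider extractors: fire them all in parallel, resolve with the first stream URL that comes back, and reject only when every provider has failed. The Extractor signature is an assumption of this sketch, not the platform's real interface.

```typescript
// One extraction function per provider; each resolves to a playable URL.
type Extractor = (contentId: string) => Promise<string>;

// Race all providers; individual failures are tolerated, total failure is not.
function resolveStream(contentId: string, providers: Extractor[]): Promise<string> {
  if (providers.length === 0) {
    return Promise.reject(new Error("no providers configured"));
  }
  return new Promise((resolve, reject) => {
    let pending = providers.length;
    for (const p of providers) {
      p(contentId).then(resolve, () => {
        // One provider down is fine; only total failure surfaces to the player.
        if (--pending === 0) {
          reject(new Error(`no provider could resolve ${contentId}`));
        }
      });
    }
  });
}
```

From the player's point of view a dead provider is invisible: the race simply settles on whichever extractor answers first.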

6.2 Analytics Without Surveillance

We wanted to understand how people use the platform without becoming the thing we were fighting against. The solution: anonymized, aggregate analytics only.

No personal information is collected. No cross-session tracking. No fingerprinting. Just anonymous session identifiers that cannot be linked to real identities, aggregate usage statistics, and content interaction data. Enough to understand what is working and what is not, without knowing who anyone is.

The analytics system uses a batched event model, accumulating events client-side and flushing them periodically to minimize network overhead. Critical events like session start and content completion are sent immediately; routine progress updates are batched. This reduces API calls by approximately 80% compared to real-time event streaming.

6.3 The Numbers

  • 50K+ lines of code
  • 150+ React components
  • 40+ API endpoints
  • 15+ database tables

7. Results & Analysis

7.1 Primary Findings

After five months of development, countless debugging sessions, and an amount of coffee that probably qualifies as a medical concern, we can report the following findings:

1. Exploitation Is Optional: Flyx operates without advertisements, tracking, or malware while providing functional streaming. This proves that exploitative practices on pirate sites are profit-maximizing choices, not technical or economic requirements. They could do better. They choose not to.

2. Zero-Cost Operation Is Achievable: The platform runs entirely on free tiers. Vercel handles hosting. Neon handles the database. The "we need aggressive ads to pay for servers" argument is demonstrably false for aggregator-style platforms.

3. Privacy and Functionality Coexist: Useful analytics can be collected without PII. Watch progress syncs without accounts. The platform works without knowing who you are, which is how it should be.

4. Solo Development Is Feasible: One person, working part-time, can build a production-quality streaming platform. Modern tools have lowered the barrier to entry dramatically. The excuse that "it is too hard" no longer holds.

7.2 Performance Metrics

Despite the complexity of the system, performance remains strong. Lighthouse scores consistently hit 90+ across all categories, which is better than most "legitimate" streaming services we tested for comparison.

8. Discussion

8.1 Implications

The existence of Flyx has implications that extend beyond the technical. It demonstrates that the exploitative practices endemic to pirate streaming are not inevitable—they are choices made by operators who prioritize profit over users.

This matters because it shifts the moral calculus. When exploitation was assumed to be necessary, users could rationalize accepting it as the price of free content. Now that we have demonstrated an alternative exists, that rationalization becomes harder to maintain. The operators of exploitative platforms can no longer hide behind claims of necessity. Their practices are revealed for what they are: greed.

8.2 Limitations

We would be remiss not to acknowledge the limitations of this work:

  • The platform depends on third-party stream providers that may change their obfuscation at any time, requiring ongoing maintenance.
  • The legal status of content aggregation remains ambiguous in many jurisdictions.
  • The author's coffee consumption during development may have reached levels that are not medically advisable.
  • Some features that users expect from commercial platforms (recommendations, multiple profiles, offline viewing) are not yet implemented.

8.3 The Cat-and-Mouse Reality

Reverse engineering streaming providers is an ongoing battle. Providers regularly update their obfuscation, change their API endpoints, and implement new detection mechanisms. What works today may fail tomorrow.

The system is architected with this reality in mind. Provider-specific adapters can be updated independently. Automated health checks monitor extraction success rates. When something breaks, we know about it quickly and can respond before users notice widespread failures.
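The adapter-plus-health-check arrangement can be sketched as follows. The interface, the names, and the 80% threshold are illustrative assumptions; only the idea of independently-updatable adapters with monitored success rates comes from the text above.

```typescript
// Each provider lives behind its own adapter so it can be patched
// independently when its obfuscation changes.
interface ProviderAdapter {
  name: string;
  extract(contentId: string): Promise<string>;
}

// Record extraction outcomes so a degrading provider gets flagged quickly.
class HealthTracker {
  private ok = 0;
  private total = 0;

  record(success: boolean): void {
    this.total++;
    if (success) this.ok++;
  }

  successRate(): number {
    return this.total === 0 ? 1 : this.ok / this.total; // no data: assume fine
  }

  healthy(threshold = 0.8): boolean {
    return this.successRate() >= threshold;
  }
}
```

When a tracker for some adapter dips below the threshold, that provider can be dropped from the extraction race until its adapter is fixed, without touching any other provider's code.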

It is exhausting. But it is also, in a strange way, satisfying. Every time a provider updates their protection and we crack it again, we prove that their fortress is not as impenetrable as they thought.

9. Future Work

Flyx is not done. It works, but "works" is a low bar. Here is what we want to build next, assuming we ever recover from the sleep debt accumulated during initial development:

  • Smart Recommendations: Privacy-preserving personalization that learns what you like without tracking who you are.
  • Progressive Web App: Offline capability and app-like experience without going through app stores that would definitely reject us.
  • Internationalization: Multiple languages, RTL support, regional content preferences.
  • Accessibility Improvements: Comprehensive WCAG compliance, screen reader support, keyboard navigation everywhere.
  • More Providers: Expanding the pool of stream sources to improve reliability and content coverage.

10. Conclusion

We built a streaming platform. It works. It does not assault users with pop-ups, mine cryptocurrency on their CPUs, or track them across the web. And we did it alone, part-time, with no budget, over five months of evenings and weekends.

That is the point. Not that we are special—we are not. The point is that if one person can do this under these constraints, then every pirate streaming site that serves malware is making a choice. They could treat users like humans instead of revenue sources. They choose not to because exploitation is more profitable than ethics.

"The pop-ups are not necessary. The crypto miners are not necessary. The tracking is not necessary. They are choices. And those choices tell you everything you need to know about the people making them."

To users: you deserve better. You do not have to accept malware as the price of free content. Alternatives can exist.

To developers: if you have the skills to build something, build something good. The world has enough exploitative garbage.

To the operators of pirate streaming sites: we see you. We know what you are doing. And we built this specifically to prove that you do not have to do it. Your greed is a choice, and that choice defines you.

Flyx exists because we got tired of watching the internet get worse. It is a small thing—one platform, one developer, one statement. But it is proof that better is possible. And sometimes, proof is enough.

11. References

[1] Rafique, M. Z., Van Goethem, T., Joosen, W., Huygens, C., & Nikiforakis, N. (2016). It's free for a reason: Exploring the ecosystem of free live streaming services. Network and Distributed System Security Symposium (NDSS).

[2] Konoth, R. K., Vineti, E., Moonsamy, V., Lindorfer, M., Kruegel, C., Bos, H., & Vigna, G. (2018). MineSweeper: An in-depth look into drive-by cryptocurrency mining and its defense. ACM Conference on Computer and Communications Security (CCS).

[3] Laperdrix, P., Bielova, N., Baudry, B., & Avoine, G. (2020). Browser fingerprinting: A survey. ACM Transactions on the Web, 14(2), 1-33.

[4] Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. CHI Conference on Human Factors in Computing Systems, 1-14.

[5] Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., & Narayanan, A. (2019). Dark patterns at scale: Findings from a crawl of 11K shopping websites. ACM Human-Computer Interaction, 3(CSCW), 1-32.

[6] Nikiforakis, N., Kapravelos, A., Joosen, W., Kruegel, C., Piessens, F., & Vigna, G. (2013). Cookieless monster: Exploring the ecosystem of web-based device fingerprinting. IEEE Symposium on Security and Privacy, 541-555.

[7] Englehardt, S., & Narayanan, A. (2016). Online tracking: A 1-million-site measurement and analysis. ACM Conference on Computer and Communications Security (CCS), 1388-1401.

[8] Stockhammer, T. (2011). Dynamic adaptive streaming over HTTP: standards and design principles. ACM Conference on Multimedia Systems, 133-144.