Saturday, November 29, 2025

5 Surprising Truths About DaVinci Resolve I Wish I Knew Sooner

 


Choosing your first professional video editing software can feel overwhelming. The debate between giants like Premiere Pro, Final Cut Pro, and DaVinci Resolve is endless, with each having its own army of devoted fans. DaVinci Resolve, in particular, carries a reputation for being an incredibly powerful—but also intimidating—tool reserved for Hollywood professionals.

But what if I told you that the most compelling reasons to choose Resolve have nothing to do with its complex reputation? What if the truths that make it the smartest choice for beginners and growing professionals are often hidden in plain sight? After years of working in post-production, I've identified five counter-intuitive truths about DaVinci Resolve that reframe its value and show why it might be the only software you'll ever need.

1. The "Free" Version Isn't a Trap; It's a Genius Business Model

The first question every aspiring editor asks is, "If DaVinci Resolve is so good, why is it free?" The skepticism is understandable. We're conditioned to believe that "free" means limited, crippled, or a temporary trial. That's simply not the case with Resolve.

The secret lies with its creator, Blackmagic Design. They aren't primarily a software company; they are a hardware company. They build and sell high-end cinema cameras, professional color grading panels, and audio mixing consoles that cost thousands of dollars apiece, with full professional setups running into the tens of thousands.

The free version of DaVinci Resolve is a strategic masterpiece. By giving away an astonishingly powerful version of their software, they introduce millions of editors to the Blackmagic ecosystem. Their goal is simple: as you grow from a beginner into a professional, you'll already be comfortable with their workflow. When you need to upgrade your gear to work faster and handle more complex projects, whose professional camera or color panel will you buy? Theirs. The software isn't the product; it's the gateway to their highly profitable hardware.

"...this misconception that DaVinci Resolve's free version is cheap or lacks features—clear it from your mind. Blackmagic Design has made Resolve free because they want you to get a taste of their ecosystem. So that when you grow, you purchase their high-end equipment."

2. It's Not Just a Color Tool; It's an All-in-One Studio

DaVinci Resolve earned its fame as the undisputed king of color grading, used on countless Hollywood films and Netflix series. This has led to the widespread misconception that it's only a color tool and that you need other software for "real" editing. This couldn't be further from the truth.

Resolve is a complete, integrated post-production suite. Instead of forcing you to jump between different applications, it houses distinct, full-featured modules called "pages" within a single program:

  • Edit: A powerful, traditional non-linear editing (NLE) timeline for all your cutting, trimming, and sequencing needs.
  • Color: The legendary, node-based color grading environment that professionals rely on.
  • Fusion: A full-blown motion graphics and visual effects compositor, comparable to Adobe After Effects.
  • Fairlight: An advanced digital audio workstation for professional audio mixing, repair, and sound design, similar to Adobe Audition or Logic Pro.

This all-in-one architecture is a massive workflow advantage. With a competitor like Premiere Pro, you typically need additional paid subscriptions for After Effects (for graphics) and Audition (for audio), or a pricier all-apps bundle. With Resolve, it's all in one place, which not only saves money but often leads to greater stability and performance. Many editors, myself included, switch to Resolve to escape the frustration of crashes and performance bottlenecks that can arise from round-tripping between separate applications.

3. You Don't Need a Supercomputer to Run It

Another myth that scares beginners away is the idea that professional software like Resolve requires a beast of a machine to run smoothly. While a more powerful computer (a 6-core CPU and 16GB of RAM are recommended for 4K) certainly helps, Resolve has a brilliant built-in feature specifically designed for those with older or lower-spec computers: a Proxy Workflow.

Here’s how it works: Resolve can automatically create low-resolution, small-file-size duplicates of your original high-resolution footage. These lightweight files are called "proxies."

You perform all your editing—cutting, arranging clips, adding effects—using these smooth, easy-to-handle proxy files. Your computer never stutters or lags because it isn't trying to process massive 4K video files in real time. Then, when you're ready to export your final video, Resolve intelligently and automatically switches back to the original, full-quality source files. This workflow is becoming even more seamless, as many modern professional cameras can generate these lightweight proxy files for you at the same time you're recording, eliminating a step in post-production. You get the smooth editing experience of a high-end machine and the pristine quality of your original footage, even on a modest laptop.

4. It's "Future-Proofing" Your Editing Career

Learning any new software is a significant investment of time and energy. That's why choosing the right one from the start is a critical strategic decision for your career. This is where Resolve truly shines.

DaVinci Resolve is the undisputed market leader for professional, high-end color grading. There is simply no other industry-standard alternative for this level of work. As projects become more ambitious and client expectations rise, many editors who start on Premiere Pro or Final Cut Pro eventually find themselves needing Resolve's superior color tools. This forces them to learn an entirely new piece of software mid-career.

By starting with Resolve, you are investing in a platform that can grow with you. You can begin with the editing basics and gradually explore its world-class color, audio, and VFX capabilities as your skills and needs evolve. You won't hit a ceiling or be forced to switch platforms later.

"If in the future, we have to shift to DaVinci Resolve anyway, then why not start with DaVinci Resolve? It makes a lot of sense. Why waste our time on Premiere Pro or Final Cut when our future is to move to DaVinci?"

5. It Enforces Professional Workflow Discipline from Day One

At first glance, project organization seems boring. But as any professional will tell you, a scalable, efficient workflow is the key to profitability and sanity. Resolve includes two organizational features that force you into good habits from the start, solving major problems that editors on other platforms eventually face. Learning its structure early prevents you from having to unlearn bad habits later. These tools are Timelines and Power Bins.

Timelines: In many NLE workflows, delivering a second format means duplicating the project or wrestling with separate sequence settings just to get a vertical cut of your YouTube video onto Instagram. In Resolve, you can create multiple timelines within a single project, each with its own resolution and settings. This means you can have one timeline for your 16:9 YouTube video and another for a 9:16 vertical Instagram Reel, all accessing the same pool of media. No duplicating projects, no re-importing footage.

Power Bins: Think of a Power Bin as a universal, cross-project folder. Any asset you place in a Power Bin—your channel logo, intro animation, a frequently used sound effect, or a title graphic—is automatically available in every single new project you create. You never have to import the same common assets again. It's a simple concept that saves an incredible amount of repetitive work over time.

Conclusion: A Tool for Today and Tomorrow

DaVinci Resolve is far more than just a powerful color grading application. It is an accessible, incredibly efficient, and strategically sound ecosystem for any editor, from the absolute beginner to the seasoned professional.

The free version is not a gimmick; it’s a fully-featured invitation to a professional workflow. Its all-in-one design streamlines the creative process, its proxy workflow makes it accessible on modest hardware, and its industry dominance ensures you’re learning a skill that will remain valuable for years to come.

As you consider your next step, ask yourself this: Given that the lines between editing, color, effects, and audio are blurring, could an all-in-one approach be the future for all creative software?

Friday, November 28, 2025

I Took an Apache Kafka Crash Course, and It Broke My Brain in 4 Ways

 

I recently went down the Apache Kafka rabbit hole, and what I found completely changed how I think about system architecture. It all started with a simple question while tracking my lunch on Zomato: how on earth does that little motorcycle icon glide across the map so smoothly, in real-time, for thousands of users at once, without the whole system catching fire?

The technology behind it, Apache Kafka, isn't just another tool. It's a paradigm shift. It operates on a few surprisingly simple yet powerful principles that fundamentally challenge the traditional, database-centric way of building software. Here are the four biggest takeaways that broke my brain—in the best way possible.

--------------------------------------------------------------------------------

1. The "Zomato Problem": Why Your Brilliant Idea Will Crash Your Database

Here’s where my thinking was wrong. My first instinct for building a live-tracking feature would be a simple, two-step process: the delivery driver's app continuously writes its GPS coordinates to a database, and the user's app continuously reads from that same database. Logical, right?

Wrong. This seemingly sensible design is a recipe for disaster at scale.

Let's do the math. Consider that a platform like Zomato might have 200,000 users tracking orders concurrently. The source material I studied pointed out that a single food delivery, from the restaurant to your door, could generate up to 50,000 location updates over its journey. Multiply those two numbers and you get roughly 10 billion writes over the lifetime of those deliveries, before you even count the constant reads from users refreshing the map. The database is subjected to a crushing load it was never designed for. As the course creator rightly puts it, the database becomes a bottleneck and is "100% going to crash."

This was my first 'aha!' moment: a database is not built for this kind of high-frequency, real-time communication.

"If so many users use this architecture, there will be too many database hits, and 100% your database is going to crash. The database is made for storing data, not for you to perform frequent read/write operations with live data."

2. The "YouTube Subscriber" Model: Kafka's Simple Solution to a Massive Problem

So, if you can't hammer the database, what do you do? Kafka’s solution is elegantly simple: the publish-subscribe model. The best analogy I heard for this is a YouTube channel.

  • A data sender (like the driver’s app) acts like a content creator. It publishes a message (a location update) to a specific channel.
  • Data receivers (the users tracking their orders) act like subscribers. They subscribe to that specific channel.
  • When a new message is published, all subscribers are notified automatically—just like getting a notification for a new video.

In Kafka's world, the data sender (the 'creator') is called a Producer, and the data receivers (the 'subscribers') are called Consumers. This decoupling of Producers from Consumers is the secret to its scalability. The Producer doesn't need to know who is listening; it just shouts its message into the void of a Kafka Topic (the 'channel'). The Consumers don't overwhelm the producer; they just listen to the Topic.

To take the analogy further, if a YouTube channel is a Kafka Topic, think of Partitions as different playlists within that channel. Kafka can write to and read from multiple partitions at the same time, which is how it achieves its incredible speed and parallelism.
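To make the analogy concrete, here's a minimal sketch of what a Producer looks like with Kafka's Java client. This code isn't from the course; the broker address, topic name, driver ID, and payload are all illustrative, and I'm assuming plain String keys and values for simplicity.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LocationProducer {
    public static void main(String[] args) {
        // Basic client configuration; the broker address is a local placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying the record by driver ID sends every update from this driver
            // to the same partition, so consumers see them in order.
            String driverId = "driver-42";
            String locationJson = "{\"lat\":19.0760,\"lon\":72.8777}";
            producer.send(new ProducerRecord<>("driver-location", driverId, locationJson));
        }
    }
}
```

The producer doesn't know or care who is subscribed; it just publishes to the "driver-location" topic, and the key is what decides which partition (which "playlist") the update lands in.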

"The same way we subscribe to a YouTube channel... As soon as I publish my video, all the receivers... who have subscribed to our channel... get that notification. This is the publish-subscribe model."

3. Kafka Isn't a Database—It's Your System's Central Nervous System

This naturally leads to a common question: Is Kafka just a fancy, glorified database? Absolutely not. This was my next major insight. They serve completely different, though complementary, purposes.

  • Kafka: A high-throughput communication system designed for handling continuous streams of events in real-time. Its job is to move data—fast. Think of it as the central nervous system, firing signals (events) across the body (your architecture).
  • Database: A system designed for efficient, long-term storage and retrieval of data. Its strengths are durability and the ability to run complex queries on that data.

This difference comes down to their core design. A database is optimized for "data at rest," using complex indexing and locking mechanisms to ensure consistency for queries. This creates overhead. Kafka is optimized for "data in motion," with extremely high throughput—meaning it can process millions of messages per second—because its job isn't to query the data, but to stream it efficiently from producers to consumers.

They work together beautifully. Kafka handles the intense, real-time flow of location updates. Once the event is complete (the food is delivered), a single bulk operation can save the final order details into a permanent database for long-term analysis. The key is to use the right tool for the job.
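As a rough sketch of that "right tool for the job" split, here's what the consuming side might look like with Kafka's Java client. The group ID and topic name are made up, and the final database write is only indicated as a comment, since that part belongs to whatever persistence layer you choose.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderTrackingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-tracking"); // consumer group name is illustrative
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("driver-location"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Stream the live update straight to the map UI.
                    pushToClients(record.key(), record.value());
                    // Only when the journey ends would you do a single write to the
                    // database, e.g. saving the final order record for analysis.
                }
            }
        }
    }

    private static void pushToClients(String driverId, String location) {
        System.out.println(driverId + " -> " + location);
    }
}
```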

4. Designed for Failure: Kafka's Counter-Intuitive Approach to Reliability

For mission-critical systems at companies like Netflix and LinkedIn, things can't just break. Kafka's reliability stems from the fact that it is, by nature, a distributed system. Instead of running on a single machine, it operates as a coordinated cluster of servers (called brokers). This distributed architecture is the key to its fault tolerance, and it embraces a counter-intuitive principle: it plans for failure.

Kafka uses a "replication technique" to achieve this. When a Producer sends data to a topic, Kafka creates copies (replicas) of that data and distributes them across multiple brokers in the cluster. One of these copies is designated as the "leader," which handles all requests. If the server acting as the leader goes down—a hardware failure, a network glitch, anything—Kafka seamlessly appoints another server holding a replica to become the new leader.

The system doesn't skip a beat. No data is lost, and service continues uninterrupted. This anticipation of failure is what makes Kafka so durable and trustworthy for applications that absolutely cannot go down.
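You ask for this resilience up front, when the topic is created. Here's a hedged sketch using Kafka's Java AdminClient; it assumes a cluster with at least three brokers, and the topic name and partition count are arbitrary choices for illustration.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions for parallelism, replication factor 3 for fault tolerance:
            // each partition is copied to 3 brokers, one of which acts as leader.
            NewTopic topic = new NewTopic("driver-location", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```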

"If the leader... disappears... another is appointed as the leader, but the data is never lost because the replication technique is followed."

--------------------------------------------------------------------------------

Conclusion

The biggest conceptual shift that Kafka represents is moving away from a world where applications constantly ask a database for updates. Instead, it ushers in a new paradigm where applications subscribe to continuous streams of events and react to them as they happen. This shift from constantly pulling data to having events delivered as they occur is what enables the highly scalable, resilient, and real-time systems that power our modern digital lives.

Now that you've seen how event streaming works, what part of your own project could be reimagined not as a simple database transaction, but as a continuous, real-time data stream?

Thursday, November 27, 2025

 

I Watched a 12-Hour Spring Boot Course. Here Are the 5 Takeaways That Blew My Mind.

Introduction: From Magic to Mastery

For many developers, especially those new to the ecosystem, modern frameworks like Spring Boot can feel like magic. With a few annotations and a single command, a full-fledged web server spins up, ready to handle requests. It's powerful, but it can also be opaque. What’s actually happening behind that @SpringBootApplication annotation?

To pull back the curtain, I dove headfirst into Faisal Memon's comprehensive 12-hour Hindi course on Spring Boot. My goal wasn't just to learn the "how," but to finally grasp the "why." This article shares the five most surprising and impactful concepts that revealed the logic behind the magic, transforming complex features from mysterious incantations into understandable, powerful tools.

1. Modern Frameworks Aren’t Magic; They’re Painkillers for Yesterday’s Problems.

The single most important mindset shift is understanding that frameworks are solutions to historical pain points. Before Spring, developers spent an enormous amount of time writing "boilerplate code"—the repetitive, foundational plumbing required for almost any application. I'd always taken this for granted, but seeing the pre-framework code laid out made me appreciate the sheer volume of repetitive, error-prone work I never have to do.

The course brilliantly illustrates the tasks that Spring now handles automatically, which developers once had to build from scratch for every single project:

  • Database Connectivity: Writing hundreds of lines of code just to establish a database connection, manage a pool of available connections to avoid overwhelming the database, and handle opening and closing them correctly.
  • Security: Manually setting up authentication and security protocols to protect application endpoints.
  • HTTP Handling: Writing low-level code to parse incoming HTTP requests, read their headers and bodies, and correctly format HTTP responses.

Spring doesn't eliminate these tasks; it automates them. It provides a pre-built, robust solution for the problems every developer used to solve over and over again. Seen this way, frameworks aren't magic—they’re powerful painkillers that automate the repetitive parts of the job, freeing us to focus on the unique business logic that actually delivers value.

2. To Truly Master the New, You Must First Understand the Old.

One of the most effective pedagogical choices in the course was the insistence on teaching older, manual technologies before introducing their modern abstractions. We learned JDBC before Spring Data JPA, and we configured Spring beans with XML before ever touching an annotation.

At first, this felt like a detour. Why learn a legacy approach we might never use? The instructor's reasoning was a lightbulb moment:

"XML config is still used in many projects... if you join a company... older code worked this way, so you might have to look at that code and not get scared by it. That's why it's important to learn."

This approach is crucial for two reasons:

  1. When your high-level tool fails or "leaks," understanding the underlying mechanism is the only way to debug it effectively.
  2. The professional world is filled with legacy code, and the ability to confidently navigate an older XML-configured project is an invaluable skill.

3. "Loose Coupling" Is the Secret Sauce, and It's Simpler Than You Think.

If you've ever wondered why developers seem obsessed with interfaces in Spring, this example is the answer. It's the secret sauce behind the framework's legendary flexibility. The course demonstrated this with a crystal-clear NotificationService example that I'll share here.

Imagine a UserService that needs to send an email when a new user signs up.

  • The Tightly Coupled Approach: The UserService directly creates an EmailNotificationService object within its own code (new EmailNotificationService()). This works, but what happens when the business wants to send an SMS instead? You have to go back and change the code inside UserService. The two classes are stuck together.
  • The Loosely Coupled Approach: Instead of depending on a concrete class, the UserService depends on a NotificationService interface. This interface simply defines a send() method. Now, you can create separate EmailNotificationService and SMSNotificationService classes that both implement this interface. You can pass either of these objects to the UserService, and it will work perfectly without a single code change.

This ability to swap implementations without breaking the classes that depend on them is the core of Spring's flexibility. This powerful technique is an implementation of the Strategy Design Pattern, and it’s the foundation upon which features like Dependency Injection are built.
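Here's a compact Java sketch of that loosely coupled version. The class and method names follow the course's example as I remember it, so treat the details as illustrative rather than exact.

```java
// The contract the UserService depends on.
interface NotificationService {
    void send(String to, String message);
}

// One concrete strategy...
class EmailNotificationService implements NotificationService {
    public void send(String to, String message) {
        System.out.println("Emailing " + to + ": " + message);
    }
}

// ...and another, fully interchangeable one.
class SMSNotificationService implements NotificationService {
    public void send(String to, String message) {
        System.out.println("Texting " + to + ": " + message);
    }
}

// UserService never names a concrete class; the implementation is handed in.
class UserService {
    private final NotificationService notifier;

    UserService(NotificationService notifier) {
        this.notifier = notifier;
    }

    void register(String user) {
        // ... save the user ...
        notifier.send(user, "Welcome aboard!");
    }
}

public class Demo {
    public static void main(String[] args) {
        // Swapping strategies requires zero changes inside UserService.
        new UserService(new EmailNotificationService()).register("asha@example.com");
        new UserService(new SMSNotificationService()).register("+91-99999-00000");
    }
}
```

In a real Spring application you rarely call new UserService(...) yourself; the container performs that constructor injection for you, which is exactly where Dependency Injection picks up the story.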

4. The Incredible Journey from Raw SQL to a Single Method Call.

The instructor walked us up what I can only describe as an "abstraction ladder," and with each step, a new layer of historical pain simply vanished. Watching the evolution of data access in Java was like seeing decades of engineering brilliance compressed into an hour.

  1. Step 1: Raw JDBC. This is the ground floor. You write SQL queries as raw strings in your Java code. You manually create PreparedStatement objects, execute them, and then iterate through a ResultSet row by row, manually mapping each column's data to a field in your Java object. It’s incredibly verbose, tedious, and prone to typos and SQL injection vulnerabilities.
  2. Step 2: The ORM Revolution (Hibernate). Object-Relational Mapping (ORM) tools like Hibernate were created to kill the pain of JDBC. Instead of writing SQL, you map your database tables directly to Java objects using annotations like @Entity. Hibernate handles the SQL generation for you, turning the messy ResultSet iteration into clean object interactions.
  3. Step 3: Standardization with JPA. With multiple ORMs on the market, the community created the Java Persistence API (JPA). JPA is a standard specification—a set of interfaces and rules. Hibernate is simply one implementation of that standard. By coding to the standard JPA interfaces like EntityManager, your code becomes more portable and isn't tied to a specific ORM tool.
  4. Step 4: The Ultimate Abstraction (Spring Data JPA). Spring Data JPA sits on top of JPA and provides the final, most elegant layer of abstraction. It completely removes the need to write any data access implementation code. You simply define a repository interface, and Spring automatically provides the implementation at runtime. Performing complex database operations becomes as simple as calling a pre-built method like .save(user) or .findAll(). The journey from hundreds of lines of manual JDBC to a single method name in an interface is the entire story of modern data access in a nutshell.
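To show how little code that final rung requires, here's a minimal Spring Data JPA sketch. I'm assuming a Spring Boot 3 project (Jakarta Persistence imports; older projects use javax.persistence), and the entity and repository names are invented for illustration.

```java
import java.util.List;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// The table mapping that Hibernate (via JPA) manages for you.
@Entity
class AppUser {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// No implementation class anywhere: Spring Data JPA generates one at runtime.
interface AppUserRepository extends JpaRepository<AppUser, Long> {
    // Derived query: Spring builds "select ... where name = ?" from the method name.
    List<AppUser> findByName(String name);
}
```

With this in place, calling .save(user) or .findByName("Asha") at runtime needs no hand-written SQL and no implementation code at all, which is the top of the ladder the whole section was climbing toward.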

5. Spring Boot's Real Superpower Is Its "Opinions."

What's the real difference between the Spring Framework and Spring Boot? I finally understood it when the instructor explained Spring Boot's "opinions." Its core philosophy is "Convention over Configuration," which means it comes with strong, sensible defaults on how a modern application should be built.

The older Spring Framework required developers to manually configure almost everything. Spring Boot flips this on its head. The classic example is the web server. When you add the spring-boot-starter-web dependency to your project, Spring Boot makes an assumption: you are building a web application. Based on this, it forms an opinion: you will probably need a web server, and a good default choice is Tomcat running on port 8080. It then automatically configures and runs an embedded Tomcat server for you. No XML, no server setup. It just works.

You only need to write configuration when you want to disagree with one of Spring Boot's opinions. Don't want port 8080? Simply add server.port=9000 to your application.properties file to override the default. This is the framework acting as a painkiller for its own past complexity, solving the configuration pain that the original Spring Framework created.
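For context, this is roughly all the code a "hello world" Spring Boot web app needs once spring-boot-starter-web is on the classpath; everything else is Spring Boot's opinions doing the work. The class and endpoint names here are mine, not the course's.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication // enables auto-configuration and component scanning
@RestController
public class DemoApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from embedded Tomcat";
    }

    public static void main(String[] args) {
        // With spring-boot-starter-web present, this starts Tomcat on port 8080
        // unless application.properties overrides it (e.g. server.port=9000).
        SpringApplication.run(DemoApplication.class, args);
    }
}
```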

Conclusion: Look Under the Hood

The biggest takeaway from this deep dive is that the "magic" of our modern tools isn't arbitrary. It's a series of brilliant, layered solutions built to solve the very real problems of the past. By taking the time to understand the history and the underlying principles, we don't just learn to use a framework; we learn to master it.

Understanding the "why" behind our tools is the key to true mastery. What piece of 'magic' in your tech stack will you explore next?

Wednesday, November 26, 2025

 

5 AWS Serverless Concepts That Will Change How You Think About Infrastructure

Introduction: The Serverless Shift

For years, building applications meant a constant focus on the underlying hardware. We provisioned servers, patched operating systems, configured networks, and meticulously planned for scalability. This foundational work was necessary but often distracted from the primary goal: writing code that delivers real value to users.

"Serverless" computing represents more than a technology; it’s a new philosophy for building software where the currency is value, not uptime. It’s a model that allows developers to truly "run code without thinking about servers," concentrating on their application logic while the cloud provider manages the complex infrastructure underneath. This article explores five of the most impactful and sometimes counter-intuitive ideas from the world of AWS serverless that will shift your perspective.

1. The Big Misconception: Serverless Doesn't Mean "No Servers"

The term "serverless" can be misleading. Servers are still very much involved; the revolutionary change is that you are no longer responsible for managing them. AWS handles all the underlying infrastructure, including the operating systems, patching, capacity provisioning, scalability, and high availability.

This abstraction is incredibly powerful. By offloading the undifferentiated heavy lifting of infrastructure management (the foundational, non-unique work required for any application), development teams can stop worrying about maintaining servers and spend more time on innovation. It allows companies to focus their resources on creating features that serve their customers, rather than on the plumbing that makes the application run.

"...serverless basically means that the compute services, the application integration services, database services, and a whole series of different functionality is delivered to you without you ever having to worry about managing underlying servers, like Amazon EC2 instances, operating systems, patching — all those things we used to have to do."

2. The Economic Revolution: Pay for Execution, Not for Existence

The traditional cloud model involves paying for resources as long as they are running, whether they are actively doing work or sitting idle. The AWS Lambda pricing model turns this on its head. Cost is calculated based on two simple factors: the amount of memory assigned to a function and its exact execution duration, measured in milliseconds. Crucially, depending on the amount of memory you assign, Lambda allocates a proportional amount of CPU power, directly linking cost to performance.
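To make that concrete with illustrative numbers: a function configured with 512 MB that runs for 200 ms per invocation consumes 0.5 GB × 0.2 s = 0.1 GB-seconds per call, so a million invocations bill for about 100,000 GB-seconds at AWS's published per-GB-second rate, plus a small per-request charge, while zero invocations bill for nothing at all.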

This leads to a powerful and counter-intuitive reality: a deployed Lambda function that isn't running costs absolutely nothing. You can have complex application code ready and waiting, but you don't pay a cent until an event actually triggers it to execute.

"At this point in time, nothing is happening, and we're not spending any money. We've created a function, but it's not actually running, so we're not paying anything."

This pay-for-value model fundamentally de-risks experimentation and innovation. Startups and individual developers can build and test powerful applications at minimal cost, freeing them from the financial commitment of maintaining expensive, always-on infrastructure. An idea that doesn’t get traction costs nothing to keep deployed, encouraging a culture of rapid, low-stakes creation.

3. The Domino Effect: How Event-Driven Architecture Works

Serverless applications are often built using an event-driven architecture, a pattern where "an event that happened in one service triggered an action in another service." Instead of components being tightly coupled and calling each other directly, they are designed to react to events as they occur, setting off a chain reaction of automated processes.

A common example illustrates this concept perfectly:

  • A user uploads a file to a static website hosted on Amazon S3.
  • This upload event automatically triggers an AWS Lambda function to process the file.
  • The Lambda function then sends a message to an Amazon SQS queue for further processing and sends a notification to an Amazon SNS topic to alert an administrator via email.
  • The arrival of the message in the queue triggers a second Lambda function, which stores the processed results in a DynamoDB table.


This approach creates incredibly resilient and scalable applications. Imagine an e-commerce site where the web servers (the web tier) take orders and pass them directly to the application servers (the app tier) for processing. If a sudden marketing campaign causes a massive spike in traffic, the web tier might overwhelm the app tier before it has time to scale up. In this direct integration model, those excess orders could be lost forever. By placing a queue between them, the web tier can place thousands of orders into the queue, and the app tier can process them at its own pace. No orders are lost, and the system gracefully handles the spike, ensuring every customer interaction is preserved.
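As a rough sketch of that buffering idea, here's what the web tier's side could look like with the AWS SDK for Java v2; the queue URL, account number, and order payload are placeholders, not real resources.

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class OrderQueuePublisher {
    public static void main(String[] args) {
        // Placeholder queue URL; substitute your own queue's URL.
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue";

        try (SqsClient sqs = SqsClient.create()) {
            // The web tier just enqueues the order and returns immediately;
            // the app tier consumes from the queue at its own pace.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody("{\"orderId\":\"A1001\",\"items\":3}")
                    .build());
        }
    }
}
```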

4. The Invisible Front Door: Building APIs Without Servers

Building a robust, public-facing API for an application traditionally required setting up and managing a fleet of web servers. With serverless, this is no longer necessary. Amazon API Gateway acts as a fully managed "single endpoint" for your application, serving as the front door for all incoming requests from the internet.

Imagine a mobile application that needs to interact with several distinct microservices: a booking service, a payment service, and an account service, each potentially running on different technologies like AWS Lambda or ECS containers. Instead of the mobile app needing to know the individual address of each service, API Gateway provides a single, public URL. The gateway receives all API calls (like GET or POST requests) and intelligently routes them to the appropriate backend service based on the request path, like /booking or /payment.

This capability dramatically simplifies building microservice-based applications. The mobile app only needs to communicate with one URL—the API Gateway endpoint. The gateway then acts as a traffic cop, handling the complexity of directing requests to the correct backend service, keeping the overall architecture clean, manageable, and easy for developers to innovate on.
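To illustrate, here's a hedged sketch of a Java Lambda handler sitting behind one of those routes. It assumes the aws-lambda-java-core and aws-lambda-java-events libraries and a standard API Gateway proxy integration; the /booking routing itself is configured in the gateway, not in this code.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class BookingHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        // API Gateway has already matched the /booking route and forwarded the raw request.
        String method = request.getHttpMethod();
        String path = request.getPath();

        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"service\":\"booking\",\"handled\":\"" + method + " " + path + "\"}");
    }
}
```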

5. Beyond Web Apps: Serverless for Governance and Automation

While often associated with web applications and APIs, serverless functions have a powerful, and perhaps surprising, use case in automated cloud governance. You can use services like Amazon EventBridge and AWS Lambda to act as automated security and compliance guards for your cloud environment.

Consider this practical example for enforcing a cost-control policy. An Amazon EventBridge rule—which is surprisingly easy to set up using a built-in wizard—can be configured to watch for the specific event of any EC2 instance entering the "running" state.

  • This event automatically triggers a Lambda function.
  • The Lambda function's code inspects the instance's details to check its type.
  • If the instance type is anything other than the approved t2.micro, the function automatically issues a command to stop it.

This is a game-changer for cloud operations. It's not just about running application code; it's about using serverless automation to enforce critical security and cost-management rules "almost instantly." This ensures compliance across an entire AWS account without requiring manual monitoring or intervention, freeing up engineering time for more valuable work.
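Here's a sketch of what that guard function could look like in Java with the AWS SDK v2. The field names follow the EC2 state-change event format that EventBridge delivers, but treat the whole thing as illustrative rather than production-ready; the function's execution role would also need permission to describe and stop instances.

```java
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DescribeInstancesRequest;
import software.amazon.awssdk.services.ec2.model.Instance;
import software.amazon.awssdk.services.ec2.model.StopInstancesRequest;

public class InstanceTypeGuard implements RequestHandler<Map<String, Object>, String> {

    private final Ec2Client ec2 = Ec2Client.create();

    @Override
    @SuppressWarnings("unchecked")
    public String handleRequest(Map<String, Object> event, Context context) {
        // The EC2 state-change event carries the instance id in its "detail" section.
        Map<String, Object> detail = (Map<String, Object>) event.get("detail");
        String instanceId = (String) detail.get("instance-id");

        // Look up the instance to check its type.
        Instance instance = ec2.describeInstances(
                        DescribeInstancesRequest.builder().instanceIds(instanceId).build())
                .reservations().get(0).instances().get(0);

        if (!"t2.micro".equals(instance.instanceTypeAsString())) {
            ec2.stopInstances(StopInstancesRequest.builder().instanceIds(instanceId).build());
            return "Stopped non-compliant instance " + instanceId;
        }
        return "Instance " + instanceId + " is compliant";
    }
}
```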

Conclusion: A New Way of Building

Serverless is more than just a collection of AWS services; it is a paradigm shift that fundamentally changes the relationship between a developer and their infrastructure. By abstracting away the servers, rethinking the economic model, and enabling powerful event-driven automation, it allows builders to spend more time on innovation and less time on operational overhead. It empowers teams to focus purely on creating value that differentiates their business.

What could you create if you never had to patch an operating system or worry about scaling a server again?
