Here’s a bash script that reads an input file and calculates a sum. The input is a file containing a few records. The script reads the file in a while loop and checks each line for a specific string; when the string is found, it extracts a numeric substring and uses it in the calculation.
Reading the file and calculating the sum
- The read -r command reads the file line by line, and the while loop runs until the end of the file is reached; an if test checks each line for the target string, as shown in the sketch after this list.
- The sed utility formats the input record as desired. When a matching record is found, the script extracts the number from it (a substring operation), and an echo statement displays the calculated sum.
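As a reference, here is a minimal sketch of that reading pattern; the file name inputfile.txt comes from the post, while the match string used here is only an example:

#!/bin/bash
# Minimal sketch: read the file line by line and test each record.
while read -r line; do
    if [[ "$line" == *mem* ]]; then   # example match; the full script tests cpu and mem
        echo "Matched record: $line"
    fi
done < inputfile.txt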
Input file
app1:
cpu:1500m
mem:10
app2:
cpu:200m
mem:20
Script logic
The result is the sum of the app1 and app2 memory values, which gives 10 + 20 = 30.
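Based on the description in this post, a sketch of the full script could look like the following; the file names inputfile.txt and output.txt come from the post, while the exact sed pattern and the parameter-expansion substring are assumptions:

#!/bin/bash
# Sketch of the script described in this post (file names taken from the post;
# the sed pattern and the substring extraction are assumptions).
sum=0
while read -r line; do
    if [[ "$line" == cpu* ]]; then
        # Remove the trailing 'm' from the cpu value and append it to output.txt
        echo "$line" | sed 's/m$//' >> output.txt
    elif [[ "$line" == mem* ]]; then
        # Take the substring after the colon, e.g. 10 from "mem:10"
        value=${line#*:}
        sum=$((sum + value))
    fi
done < inputfile.txt
echo "Sum of mem values: $sum"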

What the script does
- It removes the ‘m’ found in the cpu values, and it calculates the sum of the mem values of app1 and app2.
- read -r line reads each record; the redirection < inputfile.txt feeds the file into the while loop, and the sed output is written to output.txt.
- When a line matches ‘mem’, the script extracts the number from it, which is later used to calculate the sum.
- This is a tricky shell script and is sometimes asked about in interviews.
- The >> operator appends to output.txt; if you use > instead, the file is overwritten each time. A quick illustration follows this list.
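As a quick illustration of the sed replacement and the append behaviour described above (demo.txt is just an example file name):

echo "cpu:1500m" | sed 's/m$//'    # prints cpu:1500 (trailing 'm' removed)
echo "first"  >  demo.txt          # '>' truncates demo.txt on every run
echo "second" >> demo.txt          # '>>' appends, so demo.txt keeps both lines
cat demo.txt                       # shows "first" then "second"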
Result
Here is the output from the script: it displays the calculated sum, and the output file shows the cpu values without the ‘m’ suffix.
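With the sample input shown earlier, the sketch above would echo something like “Sum of mem values: 30”, and output.txt would contain the cpu records without the ‘m’ suffix:

cpu:1500
cpu:200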
