Bash Scripting Question

Anyone know of a way to delete large sections of text from a large text file?

sed doesn’t like me today.

I’ve got a large JSON file and I want to extract a specific dictionary from somewhere near the middle. I’m trying to start by eliminating everything before and after it, and I’m trying to script this so I can automate the process.

Assume that there are 100 lines in your file. You can get the 80 lines in the middle and write them to a new file by using head and tail like this:

cat file.json | head -n 90 | tail -n 80 > newfile.json

You can get the number of lines in the file with wc:

cat file.json | wc -l
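
Since you want to automate this, you can capture that count in a shell variable and compute the head/tail offsets from it. A minimal sketch, assuming you want to drop the first and last 10 lines (adjust the offsets to your file):

total=$(wc -l < file.json)   # line count, without the filename in the output
head -n "$((total - 10))" file.json | tail -n "$((total - 20))" > newfile.json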

These may or may not be helpful to you. Be careful not to overwrite anything important with the first command.
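If you want to give sed another try, it can print a line range directly, which does the same job as head and tail. A sketch assuming you want lines 11 through 90:

sed -n '11,90p' file.json > newfile.json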

There’s also a tool called jq, a command-line JSON processor, which may be helpful if you’re playing with JSON and Bash.
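For example, if the dictionary you want lives under a known key, jq can pull it out regardless of which lines it lands on. A sketch assuming the key is called "target" (a hypothetical name, substitute the real path in your file):

jq '.target' file.json > newfile.json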


Thanks for the reply.

The file I am working with has a little over 18,000 lines. Since I am trying to automate this process, I won’t know in advance which lines the section I need will be on.

Thanks for the tip on jq. I’ll look that up right now.

If you know the content of the section, you can use:
grep -n search file.json

This shows the line numbers in front of the matching lines. Then you can use tkk’s code with the appropriate line numbers.
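
To script it end to end, you can capture those line numbers in variables and feed them to sed. A rough sketch, assuming the section starts at a line matching "target_section" and ends at the next line that is only a closing brace (both patterns are made up, adjust them to your file, and note this is fragile if the dictionary contains nested braces, which is another reason jq may be the better tool):

start=$(grep -n '"target_section"' file.json | head -n 1 | cut -d: -f1)
end=$(awk -v s="$start" 'NR > s && /^[[:space:]]*}/ {print NR; exit}' file.json)
sed -n "${start},${end}p" file.json > newfile.json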

I’d write a Python script. Once you learn the basics of working with files, it should go smoothly.

@GeBo Hard-coded line numbers won’t work, since I am trying to automate the process and the contents of the file will not always be the same.

@Kikuchiyo I am a Python developer, but I’m trying to grow my Bash skills. I may go ahead and do this in Python and then revisit it in Bash as a learning project.

Have you got a sample file we can work on?

I should have followed up on this thread sooner. A deadline came up and I ended up using Python to massage the file.