Analyzing High Memory Usage With awk

I was recently doing some investigation into high memory usage for a Magento Commerce client of ours and found myself working closely with a really useful tool called awk for the first time. One of our Tech Leads had previously done some analysis of overall memory usage in the system following an outage and noticed that the client’s hosting company’s support had called out a few requests that appeared to be eating up a ton of memory during that time:

2019-06-12 20:52:16 GET 500 173138.361 ms 420108 kB 17.56% /banner/ajax/load/?_=1560372733339&sections=
2019-06-12 20:52:29 GET 500 162652.213 ms 452908 kB 18.59% /banner/ajax/load/?_=1560372685651&sections=
2019-06-12 20:49:02 GET 500 366376.036 ms 1191096 kB 25.09% /banner/ajax/load/?_=1560372475962&sections=

It was my responsibility to take a closer look at these to see if there was any real cause for concern here. However, I didn’t initially have a lot of information to go off of (hosting had simply sent us the above with a note that we should take a closer look). My first goal was to figure out where these requests were even logged so I could understand what I was looking at. By the time of my investigation the outage had occurred a couple of weeks prior, so I needed to use something like zgrep to dig through the rotated logs. I ended up using each request’s total execution time as the string to match, as it seemed like the most “unique” string available (that “_” value used by jQuery for cache busting ended up popping up more times than I’d expected):

$ zgrep '173138.361' /var/log/xxxxxx/xxxxxx/*log.{15..19}.gz
/var/log/xxxxxx/xxxxxx/php.access.log.17.gz:2019-06-12 20:52:16 GET 500 173138.361 ms 420108 kB 17.56% /banner/ajax/load/?_=1560372733339&sections=

$ zgrep '162652.213' /var/log/xxxxxx/xxxxxx/*log.{15..19}.gz
/var/log/xxxxxx/xxxxxx/php.access.log.17.gz:2019-06-12 20:52:29 GET 500 162652.213 ms 452908 kB 18.59% /banner/ajax/load/?_=1560372685651&sections=

$ zgrep '366376.036' /var/log/xxxxxx/xxxxxx/*log.{15..19}.gz
/var/log/xxxxxx/xxxxxx/php.access.log.17.gz:2019-06-12 20:49:02 GET 500 366376.036 ms 1191096 kB 25.09% /banner/ajax/load/?_=1560372475962&sections=

Sweet. So at this point I had a better understanding of what support forwarded us. These are from the php-fpm logs on the server, and what we’re looking at is the amount of memory that php-fpm used on each request. That’s great and all, and I could have taken a look at the controller for these requests to get a better understanding of what was going on at the PHP execution level (I did in fact do this later). However, I was hoping to find a way to dig into the historical data to get an understanding of why and how often these requests were performing poorly. We currently use New Relic as an application performance monitoring tool with the vast majority of our clients (I even wrote up a bit on the SD blog about getting certified here), so my first instinct was to use that for analyzing the metrics. Unfortunately, because the system was going haywire and running out of memory at the time, very little data had actually been gathered and reported to New Relic. Additionally, New Relic hadn’t traced any specific banner/ajax/load transactions prior to or after the outage, as the execution times hadn’t been terribly slow. So now what?

Well, maybe I couldn’t get a sense of why the requests had used a ton of memory, but I did have access to these php-fpm logs to see just how often this was happening. But how was I supposed to look up all of this data across weeks of compressed log files, extract what I needed, and then run some statistics on the data (even if it was just basic operations)?

I’d previously seen awk used as a part of more complex chained commands in some of our other post-mortem analyses (and if you’re not documenting stuff like this in your post-mortems, I’d HIGHLY encourage you to start), so I decided to take a closer look at awk and see just what I could get out of it.

Let’s start with a simple example below:

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.17.gz | awk '{ print $7 }' 

With this, I’m simply grepping for every request to /banner/ajax/load in a compressed log file, piping the output to awk, and pulling the 7th column of text, with “columns” here defined as strings separated by whitespace. It’s important to note that, unlike almost everything else in Computer Science, the columns in awk are 1-indexed rather than 0-indexed ($1 is the first column, while $0 refers to the entire line). Simple enough, yeah?
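
To make the field numbering concrete, here’s how awk splits up one of the log lines quoted above (the labels are just my own annotation of what each column appears to represent):

$ echo '2019-06-12 20:52:16 GET 500 173138.361 ms 420108 kB 17.56% /banner/ajax/load/?_=1560372733339&sections=' | awk '{ print "date="$1, "time="$2, "method="$3, "status="$4, "exec_time="$5$6, "memory="$7$8, "cpu="$9, "url="$10 }'
date=2019-06-12 time=20:52:16 method=GET status=500 exec_time=173138.361ms memory=420108kB cpu=17.56% url=/banner/ajax/load/?_=1560372733339&sections=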

So now we have tens of thousands of lines of memory usage to look through. But what if we just want to look at the worst cases for this day? Well then, let’s sort the values and grab the last few with sort and tail:

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.17.gz | awk '{ print $7 }' | sort -h | tail -5
91404
195968
420108
452908
1191096

Awesome, so that was pretty easy, and now we can see that there were really only 4 requests that appeared to be using an absurd amount of memory on the day of the outage. Let’s look at a wider span of time, though, and see if we can get a better sense of how often this is happening:

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.{1..21}.gz | awk '{ print $7 }' | sort -h | tail
85320
85320
85332
89428
91404
91404
195968
420108
452908
1191096

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.{1..16}.gz | awk '{ print $7 }' | sort -h | tail
83212
83272
83284
83284
83284
85260
85320
85320
85332
89428

So at this point I’m feeling pretty confident that this isn’t a typical issue. But is 89428 kB still a little too high for this request? Let’s try to answer that question, and again turn to awk for some help. We’re able to compute and return an average for a column by passing something like {n++;sum+=$x} END {print n?sum/n:0} to awk, where $x is the column we’re interested in:

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.{1..30}.gz | awk '{n++;sum+=$7} END {print n?sum/n:0}'
8813.21
$ grep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log | awk '{n++;sum+=$7} END {print n?sum/n:0}'
8772.13
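
And if we want a slightly fuller picture than just the average, the same pattern extends naturally to tracking a count and a max in the same pass. Just as a sketch (using the same placeholder paths as above), something like this prints a single summary line:

$ zgrep 'banner/ajax/load' /var/log/xxxxxx/xxxxxx/php.access.log.{1..30}.gz | awk '{ n++; sum += $7; if ($7 > max) max = $7 } END { if (n) printf "count=%d avg=%.2f kB max=%d kB\n", n, sum/n, max }'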

Nope! From what I was able to gather using the above, it looked like this wasn’t any sort of recurring issue we needed to watch out for. With the above data, some additional analysis of system metrics at the time of the outage, and a little further digging into the Magento source code, we were able to determine that these requests were really just the result of an overloaded system that didn’t have a reasonable PHP memory_limit set. We’re currently engaging Magento and working with hosting to find a safer memory_limit that will make sure individual requests aren’t able to use over 1G of memory.
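
For reference, and only as a sketch since the actual value and file location will depend on the hosting setup, capping requests at 1G would look something like this in php.ini or the php-fpm pool config:

; e.g. in php.ini
memory_limit = 1024M

; or enforced per pool in the php-fpm pool config
php_admin_value[memory_limit] = 1024M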

5 years and 3 majors

Whenever I talk about what I’m studying here at Syracuse, I often get puzzled looks from people. “So… what do you want to actually do with that?” is a pretty common follow-up question. My answer, “Software Engineering with a focus on Artificial Intelligence, most likely machine learning or artificial neural networks”, doesn’t seem to really interest most. But I think people would find the answer to a different question a little more interesting: “How did you end up here?”

Back in high school things were very different for me (as they were for most, I’m sure). I had long, colored hair, played in the orchestra (as well as a punk band), and knew absolutely nothing about code. I was, however, fascinated with humans: the way people interacted with one another, the way personalities changed in different environments, and most of all what was happening behind the scenes that drove people. I wanted to understand what was happening on a biological level when I would listen to a song in the morning and consequently have it stuck in my head the rest of the day. I wondered why certain study techniques would actually help me remember information better than others. As soon as I was able to take the AP Psychology course my school offered, I added it to my schedule. Mr. York, our awesome Psychology teacher, soon taught the class the basics of cognition – thought processing, memory, etc. – and I was certain that I had found my future education and career in Cognitive Psychology. I applied to colleges and selected Psychology as my major, knowing that I eventually wanted to get my PhD in Cognitive Psych and essentially research and be a part of academia for the rest of my life.

Through the chaos of the admissions process I found myself accepting a spot at Syracuse University as part of the graduating class of 2016 (Go Orange!). I moved 2,500 miles and joined the Psychology in Action learning community. I met with a peer and academic advisor, expressed my passion for human cognition and research, and found myself on the path toward the research-oriented Bachelor of Science in Psychology degree. With all of the AP credit I came to college with, I had the option of adding a second major or graduating a year early. Naturally, I couldn’t see the harm in picking up a second major. If anything, it would help me stand out as a candidate for jobs, graduate school, research opportunities, etc. After giving it some thought, I concluded that Computer Science was easily the best option. I had always been good with math and problem solving, I had a better handle on computers and technology than a lot of my peers, and I knew that if things didn’t work out with Psychology, there would be better opportunities for me if I knew how to program. Most of all, what better way to understand the networks and underlying systems of the brain than to apply concrete, mathematical models to them?

I began taking my Computer Science courses alongside working in a Behavioral Neuroscience lab sophomore year. This was really the first time in my life that I had ever programmed. Freshman year I had an interest in learning how to build websites and learned very basic HTML and CSS, but that was nothing compared to the data structures and algorithms I began to learn. I struggled at first, with some of the logic and debugging really frustrating me. Nevertheless, I programmed my way through my first few classes and began to see the potential of the tools that I had in front of me. I could finally begin to make sense of the world around me in concrete ways. I found myself losing interest in the things I was learning in my Psychology courses. The topics began to feel a little repetitive, and I didn’t feel comfortable with the idea of applying generalized theories to all (or most) individuals. I realized that I had always been curious about the fine-grained interactions happening at the neural level – the details.

The end of sophomore year also threw another curve ball at me. I had been accepted into an immersion program through the iSchool called EntreTech NYC. For a week in May, I would be able to get an up-close and personal look at large companies as well as startups in the fast-growing tech scene in New York City. Always dreaming of a future in New York City and seeing this as an opportunity to make important connections, I went on the trip having no idea what I was about to embark on. Through a nearly sleepless week, I was exposed to the intense business revolution happening down in “the city”. I heard all sorts of stories (the founder of Constant Contact literally going from rags to riches) and saw the ways these companies intended to change the world. After quite the overwhelming week, I knew that I no longer had an interest in psychological research within academia. I now felt as though literally anything was possible with an ability to write robust code and a drive to never quit. I could research ways to provide a pleasing and intuitive user experience. I could analyze user behavior using big data. I could model software to learn like humans. I could write software to analyze and map brains, or handle Human-Computer Interaction. Whatever it was, I could do it with a computer – and the best part was that I could still incorporate my interest in how humans worked.

The 2 years since have been quite interesting for me. I picked up a job working as a web developer for my university (Spiders), I interned with Makerbot last summer, I narrowed down just what I want to do with software, and I’ve added a third major. The decision to add Neuroscience as my third major was not a difficult one. The Computer Science requirements were set up such that I was already going to graduate a semester late, and adding the Neuroscience ILM at this point would only require that I take 3-4 more classes. Above all else, I would now truly be getting a more detailed look at what was happening within certain systems in the brain. And while I’m not entirely thrilled that I have to stay in school for an extra year, I figure that this is a chance I just can’t pass up.

The way that I see it, college is a very brief but important period of discovery. You’re given ample opportunity to learn by discovering much about yourself and the world around you. So why not take advantage of all of the opportunities and resources I have for a little bit longer? Why not take on 3 majors and explore interests that most would not see any connections between? When attempting to grow and learn, it is important to ask yourself, “Why not?”