326: Dongle Life

Episode 326 · February 15th, 2022 · 41 mins 2 secs

About this Episode

Chris is making hiring progress and loves asdf and M1 laptops. Steph is anticipating the arrival of one dongle to rule them all and talks about moving away from having a lot of Bluetooth connections.

Two other big things on Steph's mind are education around factories, because they're very important, and shared examples and how they can be overused. She and Chris agree that it is better to tell stories in tests instead.


This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.

Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.



Become a Sponsor of The Bike Shed!

Transcript:

STEPH: Hello and welcome to another episode of The Bike Shed. [laughs]

CHRIS: Hello, and I'm singing, and I love singing.

STEPH: It's Buddy the Elf; what's your favorite color? [laughter] For reals, here we go.

Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.

CHRIS: And I'm Chris Toomey.

STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris. What's new in your world?

CHRIS: My world continues to be focused on hiring as a pretty core aspect of things. We have happily had one offer extended and accepted, so that's great. We've got a person who will be joining the team in a couple of weeks. That's very exciting. And we're continuing in conversations with some other folks.

So I look forward to the place where I can be on the other side of this and have that team and be growing the team and not having to focus on hiring, because hiring takes a lot of effort. It is something that I believe should be done as well as possible and as intentionally as possible, and then there's just the outreach and all that. So yeah, I'll be fine with being on the other side of that. But it's going well, so that is nice.

STEPH: That's awesome that you're making progress. Once you have hired your team, will you then add to the agenda to hire someone to help with hiring?

CHRIS: I don't actually know if the organization, if the whole company has someone who's focused on hiring. I think that can make sense. Working through recruiters and things like that is something that I've seen in the past. I've seen it work for certain organizations.

I've also been on the receiving end of plenty of obviously copy and pasted very generic "Hey, person, I saw that you do lots of Java and other enterprise code software. Would you like to come work with us?" I'm like, none of those are true, and I do not want to go work with you. But thanks, I still appreciate the outreach. [laughs] So I am intrigued to see how we think about it.

More generally, this is something that you and I have talked about offline but the idea that you kind of always want to be hiring. We do have specific roles that we've identified that the budget has space for. But more generally, ideally, we're going to need to hire more people down the road, and that will happen at a particular point. But having those conversations, starting to talk to people, now planting the idea of like, hey, you're great, and I would love to work with you someday and just keeping those lines of communication open.

Networking is perhaps what the people call it. I don't know; I've never felt super comfortable with that word, but I think it's that and being friendly and staying connected with people whose work I respect and would love to work with more. So that's part of what I will come out of this with is yeah, let's always be hiring in a certain sense.

STEPH: I'm glad you expanded on it because I was just thinking I have specific ideas as to what always be hiring means to me and what those activities would include. So I was curious what it means to you. And I agree, I think it's a lot of networking. It's a lot of taking chats and social chats with folks and just talking about the company and finding out where they're at. And then one day, if it works out that then they want to make a shift, then you've already got that relationship that started, and they're already potentially interested in your team.

I guess some of the other big stuff that comes to mind, too, is like thoughtbot we have the blog. I feel like that's always really helpful too. Like when you help somebody, when you publish information that then helps them in their career, I feel like that will then draw people towards you as well.

CHRIS: Yeah, the thoughtbot blog and basically everything that thoughtbot does, the podcast here, or Upcase, or all those things were so incredibly helpful in the hiring. But I also know they're hard to spin up, is what I would say. The thoughtbot blog has I don't even know how many hundreds of thousands of hours maybe. It's weird to try and put a number to it.

But I've written a handful of posts for it, and I'm not great at writing them. They take me way longer than they should, but they took many hours. And then I had wonderful peer review by other developers at thoughtbot. And so, the amount of effort that goes into the thoughtbot blog absolutely produces wonderful benefits. But it's not free by any means, and similarly, the podcasts or Upcase or any of those sort of things.

Similarly, the one that's actually most interesting that I see a lot of organizations go for initially and then often walk back is open source. Like, oh, we have this internal library that we built to do something. What we'll do is we'll just package it up and share it with the world, and then it'll be great.

And the maintenance burden and support necessity of an open-source project is so high. I've actually historically gotten into the mode of suggesting...when I was working with clients, they would start to mention this and be like, "Oh yeah, we think we'll open source this thing, and it'll be great." I'm like, "Are you sure, though? Do you definitely want to?"

There's definitely a difference between open sourcing and just putting an idea out there is one thing that I would say. Can you just write a blog post that has code snippets but not reusable code that you have to maintain that people, unfortunately, I think unfairly expect responsiveness and maintenance over time? And what if you stopped using that technology? What if you stop using this thing, but your name is still attached to it? And people have expectations of what that looks like.

Or people come in and say, "Hey, this is great, but I want to change it in this way." And you're like, "Yeah, but that actually doesn't work for us. That's not how we use it. But we would be on the hook to maintain that code if we accept your pull request." And so, as wonderful as open source is, I tend to be on the more conservative end of the spectrum of like, are you definitely sure you want to open source this? Is there another way that you can share this with the world? Can it be a conference talk, or a blog post, or something like that? But it is an interesting one.

STEPH: Yeah, I've been a part of several teams that have started with that; let's start an engineering blog. And their hearts are totally in the right place, and I understand why they want to do it. But like you just said, there's a cost to that. And if you don't have something like thoughtbot has, like an investment day or a time for engineers to be able to contribute to that blog, then they're just not going to, because they're not going to have the downtime to be able to do that. And it is hard to write and publish and be happy with what you're going to publish to the world.

I really like what you're talking about in terms of the maintenance burden because I can't remember if it was an Upcase conversation or if there was something...but I was early on at thoughtbot and had a similar thought of why can't we just open source it? Why can't we make it public? And there was a very big thoughtful discussion around well, we have to have all these considerations in place. Who's going to maintain it?

Just like FactoryBot is a really big internal project at thoughtbot. And there's typically a rotation of folks who will then take ownership and then onboard other people who are interested in it and curate the issues. And it's very important work, but you have to allocate time for it. All of that to say, I totally agree. There's a big burden that goes with it.

CHRIS: Yeah, it's interesting that this has been an evolving thought in my head, and it makes me sad is another thing I'll say about it. I wish it were easier to just put code out there in the world and have the expectations properly calibrated for like, hey, I did this thing. Here's a code sample. It worked for me.

Actually, I found dropping something in a Gist...a Gist just has a point-in-time connotation to it that I like. Like, if I see a code sample in a Gist, I'm like, I have no expectation that that person is going to do anything or respond to anything I have to say. But this is great because I now have this sample code that helps me get a little bit further.

And I may have to vendor that code or take it on myself, and I now own it. It's not this person's responsibility. But the minute you have a repo with a README that says stuff and like, here are the installation instructions, the expectations just flip in a way that I don't think is...at least I become cautious around. And that does make me sad, though.

STEPH: Yeah, it feels like you went from offering an example to I'm offering a product. And so then as soon as people feel like, oh, you're giving me something as a product that you maintain, then I'm going to have higher expectations of it should work how I expected it to work. I'm going to ask questions. And yeah, you make a lot of good points.

CHRIS: Would you like to pay me $0 for me to build software for you? That sounds fun.

STEPH: [laughs]

CHRIS: And open source is such a wonderful thing. And so I'm interested in...like, I follow a lot of folks who are in the open-source world and have found ways to make it make sense financially or otherwise or organizationally. Open Collective and things like that is one option or OpenCore and then paid pro models and things like that like Sidekiq as an example. Sidekiq just celebrated ten years with some wild numbers in terms of the revenue, and it's like, yeah, that's fantastic. This is a cornerstone piece of software in the Ruby and Rails community. And also, Mike Perham had a great outcome from it. I think that's a win.

So maybe blogging, maybe, but not sure. Probably not open source is my suggestion, at least for me. But one thing that I am interested in that hasn't been an option in my mind for a long time, but I'd love to get back to is conferences and going there, especially with a small team from an organization. The three developers we go, and we hang out at a conference and the company has a space there. And there's room to have conversations and meet people. That is one that I would love to continue in a way of making sure that our name is in people's minds as a place that they could work in this world.

It is interesting, though, that it gets scoped a little bit like we are definitely a Rails shop. But that's not all that we are, or that's not the complete totality of our technical identity, so it becomes interesting. But I think it's probably the most representative. And I definitely see the Ruby and Rails community is having a good product-centric mindset that is definitely the sort of thing that I want in the teams that I'm building.

STEPH: Yeah, I think that's an awesome idea because it's a way that you could focus on creating content. It'll likely have a big impact. But then you can also replay that content, but it's not the commitment of a blog or a commitment of open source.

CHRIS: But yeah, so hiring has been, I would say, most of what I've been doing. One other thing that was fun this week, so I have my new laptop that I've had now for a couple of months, I'd say. And just this week, we had a very frustrating issue where Heroku stopped deploying our application. Just one day, it was like, nah, it doesn't work anymore. And I was like, well, that's less cool than I want it to be.

And so one of the developers on the team dug into it, and it turned out Node-sass was the answer, which we're not even using, which is extra unpleasant. It's just part of Sprockets and Webpack or something like that. There's some downstream dependency chain. We're using Tailwind and PostCSS. So we don't even need Node-sass. I think maybe PostCSS does.

But anyway, turns out Heroku had switched to using version 16 of Node just without telling us. We were previously on 14, and then Node-sass didn't build on that. There was just this weird dependency chain that stopped working one day. And we weren't pinning the Node version within our application. So one of the developers figured this out, pinned us back to version 14 something of Node, and that was fine. But then my computer got confused because the versions were out of sync.
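
For context, pinning the Node version for Heroku's Node.js buildpack is normally done through the engines field in package.json; a minimal sketch (the version range here is illustrative, not necessarily what the team used):

```json
{
  "engines": {
    "node": "14.x"
  }
}
```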

Anyway, asdf is great. That's the first thing I'm going to say. So I use asdf to manage the versions of Ruby, and Node, and Yarn, and Elm, and basically everything else that I use. And I love that it's all under one hood, so asdf, wonderful. Also, my laptop, wonderful. I really love the M1 fancy laptop. But what was fun was I had to install the new Node version.

And this was the first time in the three months I've had this computer that I've heard the fans come on. Finally, I asked it to do something hard enough that it was like, whoa, whoa, whoa, I'm going to need some backup here. And so the fans finally kicked in. So I don't know what's going on installing Node, but good for everyone involved, [laughter] impressive to make such aggressive use of all of the hardware in my computer.
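
For anyone unfamiliar with asdf, the "all under one hood" part is a single .tool-versions file checked into the project, one tool per line. A sketch with placeholder versions, not the actual ones from Chris's setup:

```
ruby 3.1.0
nodejs 14.19.0
yarn 1.22.17
elm 0.19.1
```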

STEPH: Yeah, I love asdf. I miss it right now because I'm on my client machine, and we're not using asdf. Instead, we are using Chruby, C-H Ruby to manage Ruby versions. asdf is awesome. That's fun. It's the first time that the fans kicked on. I'm intrigued with my machine. I haven't really paid attention to it when the fans kick on except the one time where I had like a Ruby process that was running away, and I had to figure out what was going on there. Because then all the CPU was just being dedicated to Ruby even when I wasn't using Ruby. But since then, I haven't heard the fans. It's been very, very quiet. It's lovely. I like when it's quiet.

CHRIS: Oh, it's been great. It was interesting because it was this weird noise that I'd forgotten about.

STEPH: [laughs]

CHRIS: My previous computer was so old that this was happening regularly whenever my backup process would run. Apparently, that is a very computationally intensive activity. So I would hear the fans kick in, immediately go find the backup process and say pause for 60 minutes or whatever it was. Just like, leave me alone. Stop it. The computer is getting too hot. You need to calm down. But now, with the new computer, there was nothing I could do to make it happen. And then finally it happened, and I was like, oh yeah, I guess this computer has fans. That's neat. But yeah, so things that are great, asdf and the M1 laptops.

STEPH: Nice. Yeah, you're one of the few individuals I know that's using one of the M1 chips. So it's been reassuring to hear how well it's going because I did not opt in to that new-new. I opted in to the give me something stable and steady that I know so that way I don't have to fuss with it because I can then fuss with all the other things that I need to fuss about.

CHRIS: So much fussing to do.

STEPH: Lots of fussing. Fussing and cussing is what I do over here.

CHRIS: [laughs]

Mid-roll ad

Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem.

So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy.

That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software.

So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's New Relic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed.

CHRIS: Well, speaking of, what have you been fussing and cussing about this week, Steph?

STEPH: So this is more in the pranting area, which is our portmanteau for praise and rant, where I'm super excited. I have a delivery coming from Amazon today. So I'm that person that keeps checking and waiting for it to show up. But I'm finally going to have one dongle to rule them all.

I have a very messy approach right now [laughs] where I have all the dongles and have to plug everything in. And you know what? Normally it's fine. It's fine because I do it once, and I don't have to mess with it that much. But because I now have my thoughtbot laptop and I have a client laptop, and I needed to be able to switch back and forth, it is just too much. And I was realizing how many dongles I'm having to use. So I have one dongle to rule them all. It's showing up today. It's a very exciting day.

CHRIS: I'm very excited for you. I recently made a similar switch when I got this new laptop. I was like, you know what? I'm going to look into it because power can come over USB-C and whatnot. And I was like, all right, it's finally time. I want to be able to just click in. And it's one of those things that feels trivial, or at least in my mind, I'm like, this doesn't feel like it'll make that big of a difference.

But it makes it so much easier to disconnect my laptop, go somewhere else, and then come back. And I noticed myself doing that more, which I think is a positive thing. Otherwise, I'm just anchored to my desk. I'm like, I don't want to unplug everything and then have to replug it. That's like a whole thing. But now that it's not, I am more mobile, more flexible in where I'm working from, and I found benefits from that. So I'm a fan. I'm very happy that this is going to show up for you [laughs] and really change the way you're working.

STEPH: Well, I've started moving away from a lot of Bluetooth connections as well because my keyboard will support Bluetooth, my headphones support Bluetooth. And I liked the idea of being wireless. But then, especially from switching laptops back and forth and then having to reconnect and all of it, it was just too tedious to go back and forth.

And yeah, I'm with you where I didn't want to have to leave my desk and unplug everything and then bring it back where I'm playing, you know, like the game Operation where you had to reach in and then you had to grab different little bones? If you don't know the game Operation, that sounds really weird. But it felt like a game of Operation where then I was having to find all the dongles and connect them and plug them all in. And yeah, so it's going to be wonderful.

CHRIS: Even knowing the game Operation, that still sounds kind of weird.

STEPH: [laughs]

CHRIS: But I really love that there are people out there listening that are like, what are they talking about?

STEPH: What weird childhood did you have?

CHRIS: Yeah, I'm definitely Team Wired-Almost-Everything. The only thing that I have that's wireless is my headphones. And it only works kind of, and I have to trick them sometimes. And the worst thing is occasionally my computer will have control, whatever, they're connected. So I'm listening to music on my computer and then suddenly, my phone will just steal it. It's like, what are you doing? No.

Or, randomly, my headphones will be sitting away from me, and they'll just connect. And I'll be in the middle of a call on something else. Like, I'm here talking to you, and suddenly my headphones are like, hey, we wanted to join the party. It's like no, absolutely not, [laughs] not at this moment, under no circumstances. So I don't really believe in Bluetooth as a technology. I'm very much a wired fan, particularly with things like keyboards and whatnot. Bluetooth, I've yet to be convinced that it is a sound technology.

STEPH: I have the headphones where they try to be very smart, and they are pretty smart where they will block out sound. But then, if I am talking, then it will put me in more of an auditory space where then I can more easily hear, and it won't filter out sound as aggressively. But I've noticed a problem. And it's when I'm watching anything that's funny that then I'm laughing.

So every time I laugh, my headphones think I'm talking to someone, and then it will switch over to where it's trying to let me hear more sounds out in the universe. And then it kicks back on because it's like, okay, she's done talking. It's a very jarring experience. [laughs] And I haven't figured out how to turn that setting off. It's like, oh, I just can't watch funny stuff with my headphones right now, which is also problematic with pairing because I tend to laugh a lot with pairing. It's a thing. I'm working on it. The struggles of Shteph.

CHRIS: Well, at a minimum, it sounds like your dongle life is going to be improving very soon, and that's exciting.

STEPH: Dongle life, it'll be single dongle life. That's it. [singing] All the single dongles, all the single dongles. Put your adapters up. [laughter] On a different note, talking about some of the work that I've been doing this past week with Joël Quenneville on our client work, we have been looking for more ways to bring down CI build time. We’ve talked about the fact that we're working on some of that horizontal scaling. And I don't have an update there.

But the other update I have is that we want to be very strategic about where we invest our time because improving the test suite is not trivial work. A lot of the low-hanging fruit has already been done, so triaging a flaky test can be very difficult, and it can take us a while. So before we invest a lot of time into a portion of the test suite, we just want to verify what the outcome is going to be. Are we improving developers' lives by this much? And how do we measure that? Are we reducing the CI build time, and how do we know that?

And one of the areas that I really wanted to focus on is FactoryBot because there are a lot of factories. The factories tend to do a ton. So they are calling out to the database and building a lot of associations. And that's something that the team knows about as well is that there are just so many SQL queries that get executed in tests. And it would be great if we could reduce the number of SQL queries that are going out.

And FactoryBot includes some ActiveSupport notifications, which means you can subscribe to factories being run, which then gives you access to details like which factories are being used, what build strategy is used (are you calling build, build_stubbed, or create?), and the factory’s execution time.
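
A minimal sketch of that subscription, hooking into FactoryBot's factory_bot.run_factory notification; the constant name and file location are assumptions for illustration, not code from the client project:

```ruby
# spec/support/factory_profiling.rb
# Tally how often each factory runs, with which strategy, and how long it takes.
FACTORY_STATS = Hash.new { |hash, key| hash[key] = { count: 0, total_seconds: 0.0 } }

ActiveSupport::Notifications.subscribe("factory_bot.run_factory") do |_event, start, finish, _id, payload|
  stats = FACTORY_STATS[[payload[:name], payload[:strategy]]]
  stats[:count] += 1
  stats[:total_seconds] += (finish - start)
end
```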

So then the idea of this is that if we can harness a lot of the data that we can collect from FactoryBot, then can we ask questions around what's our slowest factory? How long does it take, and how many places is it being called? Because then ideally, we can calculate to say, okay, if this factory takes this long and it's used in this number of places, then we can have a formula to figure out how many minutes of our test suite is spent just on executing that factory.

And then if we can reduce the time of that factory, let's say by half, then we know how much time we're shaving off of our CI build. And then we have this more concrete verified okay; this is worth our investment. We want to pursue this, even if the factory may take us a full day to improve because it does so much. And it's just gnarly. So it's going to take some time to really refactor it into a more simplified state. So, in theory, this sounds really, really great, and it was a lot of thanks to Josh Clayton, who was the one that advocated saying that we could use the ActiveSupport notifications to find a lot of this data.

And so Josh and I paired on this for a bit to look into can we answer some of those other questions as well? And we were testing it on a small side project that he had, which was great because the other codebase is very big, and feedback is just a lot slower. So we wanted to first prototype it and have a proof of concept in a very quick space and just to be able to look through the data and make sure the assumptions that we had and the value would be there. So we applied that first, and that was going really well.

And then Joël Quenneville took that strategy and then applied it to all the specs in the spec models directory and ran it for the much larger client codebase and got some really great results. And we also used a low fidelity approach where we wanted to be able to see which factories were the most popular. So how often are they getting called? And the average execution time. So that way, we could then quickly look at this scatter plot, and then we could see, okay, who's in the far upper right quadrant? Because those are the factories that are causing the most pain.

But we started looking into a graphing library and what are we going to pull in. And Josh had the great idea. He's like, "I wonder if Google Sheets has a scatter plot. Can we export this to CSV data and then copy it from the terminal and import it into Google Sheets?" And it turns out that you can. So then we grabbed it and put it in Google Sheets and then just converted it into a scatter plot, which was really nice because then we didn't have to incorporate any chart library or any graphics or anything. We could just plop it into Google Sheets and then easily share it.
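
Continuing the sketch from above, dumping those tallies as CSV at the end of the run is enough to paste into Google Sheets and chart call count against average time; again, an illustrative approach rather than the exact script used on the project:

```ruby
require "csv"

at_exit do
  # One row per factory/strategy pair, ready to paste into a spreadsheet.
  puts CSV.generate do |csv|
    csv << ["factory", "strategy", "calls", "avg_seconds"]
    FACTORY_STATS.each do |(name, strategy), stats|
      csv << [name, strategy, stats[:count], stats[:total_seconds] / stats[:count]]
    end
  end
end
```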

So we now have this list thanks to Joël because he ran it through the spec models directory of all the factories that are getting called. And it's really interesting. And there's one, in particular, that is high on the list. And it was actually one of the first ones that we worked with when we were troubleshooting a test that took us a while when we first joined the project. And the average time for this factory is four seconds, and it gets called over 500 times. It's like 527 times. So then if we multiply that, so if we say, all right, it takes about 4 seconds times 527 and then divide it by 60 for minutes, that's 35 minutes, 35 minutes for that factory.

Now, granted, these are getting parallelized across different processes. But still, if you divide that up across four processes, that's a non-trivial amount of time. So I think this is going to be really helpful and really interesting data that we can then use to drive our decisions to say, okay, we want to take this factory and let's say even if we can cut it into half, let's say if we bring it from 4 seconds to 2 seconds, it'll go from 35 minutes to 17 and a half.

CHRIS: Oh wow, I love the methodical approach. I love actually having a number where you're like, this is how much pain, or how much cost, there is right now. And so we've identified that this is this high-level thing. I love the intentional starting with, like, let's measure it. Let's understand where the most bang for the buck is.

In particular, the graph that you're describing reminds me of one, though I haven't actually worked with it much. Code Climate has a graph that they use, which is churn versus complexity. So it's like, you may have a very complex piece of your code, but someone wrote it once, and it just sits in the corner. And you know what? It quietly does its job. And yes, it's very complex, but nobody needs to touch it. So it's not a big deal.

And then you have stuff that changes constantly, but it's super simple, so that's fine too. Your UsersController is probably going to change a bunch; that's fine. But the stuff that is constantly changing and very complex that's the magic quadrant that, like, pay a lot of attention to that. And similarly, which are the ones that are being used a lot and take a while? That's the magic quadrant. I'm intrigued now. I want to search for more magic quadrants that deserve attention. But for now, that sounds like a lot of fun.

So now, what's the approach that you're going to take? I imagine you need to alias that factory and have it exist because some tests will rely on certain details of it. This is my guess. So let me see if this is the way that you're thinking about it, alias the factory, so you have a representation that does all the stuff that the current one does. But then you have a new one that is much more pared down.

And then, on a test by test basis, you start switching it over and trying to move things to the lower weight, the slimmer version of the factory. But I would think you would want to do a gradual process if there are 520-ish usages. Because otherwise, just changing that factory out from under all the tests, I imagine you'd break some tests if you just were like, what if it did less?

STEPH: Yeah, I like that idea of the incremental approach. And that all sounds great, especially the alias because you're right; we want to change it incrementally and not all of them at once. But then essentially implement, one, because I want to see what does the pared-down factory look like? What is the basic factory that we can get away with? And then how long does it take for that factory to execute? Because then that will help confirm, can we really get it down to two seconds? Or is this just a factory that's always going to take three and a half seconds, and then it's not really that much of a payoff? Maybe we should look for a different factory to investigate.

And then also understanding from the test are people reaching for this factory all the time because it builds up the world and all of these tests need the full world? Or are people just reaching for this because it does the one or two things that they need, and we can get away with a much slimmer factory? So right now, it's in the space of understanding why are people reaching for this? What are the tests they actually need? And yeah, how can we do it incrementally?

At one point, we may even be able to try to programmatically switch it out. Maybe we just find 50 tests that are using this once we have the slimmed-down version and we replace...50 is probably too big. But if we replace X number of tests with this factory, how many of them fail? Maybe 10% of them passed. Cool, let's just take those 10% as a win and issue those as a PR. So that could be a strategy as well just to find if there's any that are super easy to change. All we had to do is literally change the name of the factory.
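
One way the incremental switch could look is to keep a slimmed-down default factory and move the expensive world-building behind an opt-in trait (or a child factory), then migrate tests over one at a time. The user, project, and subscription factories below are hypothetical stand-ins, not the client's actual factories:

```ruby
FactoryBot.define do
  factory :user do
    # The pared-down default: only what a valid record needs.
    sequence(:email) { |n| "user-#{n}@example.com" }
    name { "Test User" }

    # Opt-in version for the tests that genuinely need the full world built up.
    trait :with_full_world do
      after(:create) do |user|
        create(:project, owner: user)
        create(:subscription, user: user)
      end
    end
  end
end
```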

The other big part that's on my mind is education around factories. I think a lot of people on the team understand that factories are very important. They can be very helpful. They can also be very cumbersome. But it feels like a good opportunity to say, "Hey, we are specifically working on these factories. Here's the reasoning that led us to work on these factories.

When you're in the space of factories, please be mindful about what are you reaching for? Is there a slimmed-down factory that you can reach for? Maybe you can implement your own slimmed-down factory if one doesn't already exist." So I like the idea of coupling it with also just broader awareness because we are but two people. So I would love for more people to be part of the changes.

CHRIS: Unsurprisingly, there are some wonderful blog posts on the thoughtbot blog that speak to this topic. One that I'm a fan of is Factories Should be the Bare Minimum. This was written by Matt Sumner. And it describes basically that idea of factories shouldn't build the worlds. They should give you the pieces that you can use to build the world but not build the world entirely. And so I'm a big believer in that, having your factories be as minimal as possible. They should be valid, but that's about it.

And then I will often reach for extracted helper methods and keeping those as locally scoped as possible often in the spec file, or if not, maybe they're sharing spec support. But being intentional with where we reach for them and not having everyone use the same thing that just slowly gets added to. And it's like, do I actually need everything that's in there?

The other thing that's interesting is the idea of having a factory that does a ton is, in my mind, sort of in direct contradiction to what I believe factories exist for which is when I think of factories, they're useful to fill in the rest of the details such that you don't have mystery guests in your test.

But you can explicitly say build me a user who has an email that looks like this because, in this test, I care about the email, but I don't care about the rest of the details. I don't care about their name. I don't care about their password, or the roles, or any of the other details. Just let the factory deal with that because it's not important to the test. But I want to make sure that the relevant detail is present and specified within the spec.

If you have a factory that builds everything in the world, that's like build a user and then grabs the first action from the project that that user has, because I know that they do because they use the big factory, that is just in direct contradiction to what we want factories to do. We want tests to tell their story. We want to avoid mystery guests. Factories are a great way to do that while still remaining concise. But if your factories just build the world, then there are some mystery guests in the world, I can assure you.
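
As a tiny illustration of that, the detail the test cares about lives in the spec, and the factory quietly fills in everything else; the email-normalizing behavior here is invented purely for the example:

```ruby
# Assumes FactoryBot::Syntax::Methods is included in the RSpec configuration.
RSpec.describe User do
  it "normalizes the email address" do
    user = create(:user, email: "Person@EXAMPLE.com")

    expect(user.email).to eq("person@example.com")
  end
end
```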

STEPH: Yeah, I agree where factories have served as an abstraction for what I think is important to the test. But then there becomes this moment of where someone thinks, well, I need to build up these records, but I don't really need to reference them directly. I just have some coupled code that's going to rely on these. And so I don't explicitly need them, but they need to be there. So I'm going to extract it away, and a factory feels like a good place for me to extract that too.

And I would take the very hard opposite approach where if you have coupled code and you have these dependencies that aren't necessarily explicitly used in the test, but they are required for the test, I'd rather see a painful test setup than have all of that extracted away from me. Because then if I do need to triage or troubleshoot that test, it's going to take a lot of just mental overhead to work through what do I actually need here and why? So I'd rather see that painful test set up then have it moved somewhere else.

But I think a lot of people take the opposite approach of where if I abstracted away, my test looks prettier. And I'm like, yeah, but maybe to you in the moment, but it's going to cause me a lot of pain further down the road when I have to work with this. So show me all the crap that you had to do upfront. Just let me know. [laughs] I'd rather the test be honest with me.

And then it's a really nice jumping-off point because you can see a test that does all of this. And instead of blaming the test and thinking it's the test's fault, you recognize this test has a lot of complicated setup, and it's probably because of the code and how the code was written. And we should look at refactoring the code, not at how can we make our tests look prettier?

CHRIS: Unsurprisingly, I agree with 100% of that. Someday we'll find things other than Pop-Tarts and IPAs that we disagree on. But today is not that day. [laughs] Once again, today is not that day.

Mid-roll Ad

Hi, friends. And now a quick break to hear from today's sponsor, Scout APM.

Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, and memory bloat.

Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.

And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.

STEPH: Well, here's one more that maybe you'll agree with, maybe you won't. We'll see. I'll try not to lead you in either direction, but shared examples. If I'm going to rant for a little bit, shared examples are in that space of where they just get used so heavily, and they abstract away important information about the test. And it makes the test so succinct that I don't actually know what the test is doing.

And I've seen a number of places where a shared example has been extracted, and it is only used within that test file once, maybe twice. And I'm just like, friends, too much abstraction. Please keep it close. [laughs] We don't need to move it away. We want our test to be friendly and just full of context, which is what I mean when I say friendly. Full of context is what I'm looking for, and well-named variables. And I want to be able to read the test and see what's happening.

So my little complaint for today would also be about shared examples and how they can be overused. And they do have a really neat purpose. They can be helpful if you're testing, say, a controller action and you want to extract that authentication check, making sure that a controller always has authentication and that the shared example is getting included. Sure, that feels very helpful. But that's really one of the few cases I can think of where a shared example comes into play.
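
For that one narrow case, a shared example might look something like this; the route helpers and redirect target are placeholders:

```ruby
RSpec.shared_examples "an action that requires authentication" do
  it "redirects unauthenticated visitors to sign in" do
    make_request

    expect(response).to redirect_to(sign_in_path)
  end
end

RSpec.describe "Projects", type: :request do
  describe "GET /projects" do
    it_behaves_like "an action that requires authentication" do
      let(:make_request) { get projects_path }
    end
  end
end
```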

And if you are testing code over and over throughout different parts of your codebase, there's probably a part of your codebase that needs to be pulled out into a class so you can test that class in isolation. And then you don't need to retest it throughout all of your other classes. Have I already ranted about shared examples? I can't recall at this point if I have or not. [laughs]

CHRIS: I don't think we have talked about shared examples before. And I appreciate you not leading the witness here. But I think I'm in agreement with you, particularly the way you refined it there at the end because that controller example is the one rare case where I might reach for it. But in general, I think this is one of those things that I saw early on in my career. I was like, oh, cool; this is a way to clean stuff up and DRY and all those wonderful things. And then I've definitely felt the pain of just overuse of shared examples and ways to pull details out of tests. But it's like, I want to see the details.

And I think broadly, that's the theme that you and I are very aligned on is like, no, no, no, tell me the story in tests. I am much less interested in having these concise tests that have a single line, and it's like, expect foo to have bar. And it's like, why? Because...oh, there's a let and then a subject, and it's a shared...oh okay. Now that I can put it together, I can tell the story, but I cannot look at this test and see a story. I want to see a story, friends. So yes, I'm totally in alignment, especially with the slight caveat at the end of like, there are cases where it's useful.

Similarly, I've used let. I definitely have not even that long ago. And I stand by the usage, but it was very rare. It's very rare, and it is something that I'll look at and be like, am I sure? Definitely, is this the right thing, or did I do something wrong? Because if I find myself leaning towards let, it's like there's something that I don't think is important to the story of this test that still needs to happen. Why is that? What's going on here? Something feels off about that. And similarly, with the shared examples, it's like, is there not a different way to extract this such that I can test it in a way that I have confidence in, and then we're good?

I occasionally will talk myself into using shared examples or something like it where I'm like, oh, but it's really important that everything in the app has that authentication layer put in. And so, I should definitely have this very easily reusable test that can ensure that I have it. But there's a tautology there of well, if I write the test, then I'm definitely thinking about the implementation. But if I forget the implementation, I might also forget the test. And so, it actually doesn't provide any real safety.

And in those cases, that's a rare case where I would reach for some weird metaprogramming thing that's like, controllers must do the thing. And we say that in the ApplicationController, and then everything that inherits from it will raise if it doesn't implement the authentication layer. Something like weird code that says, "You shall not pass. You must, in fact, implement the authentication layer." Rather than saying, "Oh, we'll just make it really easy to test it so that we always test and, therefore, always use the necessary authentication layer." But yeah, that's a hard one to describe on the radio. So I don't know if that came through clearly. But that's sort of my headspace on this.

STEPH: Yeah, and all of that makes sense. I'm trying to think of a good example. And it's been a while since I've used Pundit, but I feel like Pundit may have a really good example of this where it's very easy to document to say, hey, all of these controllers need to make sure that they call out to this class or that there's authentication. I can't remember the exact code and how that works. But I feel like Pundit has a really good example of that behavior.

CHRIS: I think they do. It's something where I think it's a configuration level thing, but you say, "Hey, Pundit, we should definitely authorize any access to models." And so Pundit then has a before action, or it's an around filter one of those. But it will raise an unauthorized error, I want to say. Like, you did not do the authorization dance in this. And that's a great example of like, I like that it is loud and annoying and in your face.

And it is not possible for me to forget it because we configured it throughout all controllers. And so it's that sort of thing that I would probably reach for even though that code gets complicated and messy, and actions at a distance. But it's worth that trade-off in my mind to have, like, I don't want to forget to do the authorization stuff. Permissions matter.
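
If memory serves, the Pundit hook Chris is gesturing at is an after_action rather than a before_action; in recent Pundit versions it looks roughly like this, raising Pundit::AuthorizationNotPerformedError for any action that never called authorize:

```ruby
class ApplicationController < ActionController::Base
  include Pundit::Authorization

  # Fail loudly if an action finishes without ever calling `authorize`,
  # so a forgotten permission check can't slip through quietly.
  after_action :verify_authorized
end
```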

STEPH: That was a really nice pre-emptive approach as well. Because in most cases that we're describing, it's the I'm going to write a controller, and then I need to add this test to verify and prove that yes, I didn't forget the authentication stuff. Versus upfront, you're setting a configuration to say, "Hey, please remind me to do the configuration or the authentication step so that I don't miss this." So that's also a really, really nice approach.

CHRIS: Yeah, the same version of me that's going to forget to write the test is going to forget to write the implementation. So I don't want to trust that version of me to save that version of me. I'm equally untrustworthy in those situations.

STEPH: You want to trust the version of you that's going to get yelled at by the code if you don't do it.

CHRIS: Yep, I'm going to trust the version of me that was like, I don't trust any future version of me. I will yell at myself if I have not done the necessary things.

STEPH: [laughs]

CHRIS: To be clear, this is like a life philosophy of mine. I don't try to remember things because I forget stuff a lot. It just happens. And so if I need to take something out the door with me, it goes in front of the door but extra critically, and this is the subtle line. Because plenty of people do that trick where you put a thing in front of the door because then you can't leave without it. There's no way to forget it. But by virtue of that, you cannot put something in front of the door until it is time to use it.

Like, if ever you have to go and be like, oh, I don't need it now, though, so I'm going to move it out of the way, open the door, and then leave. No, no, no, because then you've broken the magic of the thing in front of the door must leave with you. So it's a very subtle line. I will play games with myself. I'll be like, I am forgetful. I will not remember this. I do not trust future me, so I'm going to play a trick on them. But you got to calibrate it just right.

STEPH: That's really funny because I totally [laughs] didn't think about it until now how you described it. But I have definitely done that where I set a rule for myself, but then I'll break it. And then, of course, everything all of it collapses. There is a time when Tim, my husband, was going through a developer bootcamp. And as he was learning the whole world and everything that's out there, he would ask me all these questions. And he's like, "Do you know this?" And I'd be like, "No." He's like, "Do you know that?" I was like, "No."

He was like, "I thought you knew this stuff." He's like, "I thought this was your job." And I was like, "Yeah, I'm really good at finding it and Googling it. But I work really hard to not store this in short-term memory because I'm filling it up with other stuff. So I work really hard to be able to find this stuff and track it and Google it."

But now, there's a lot of stuff that I try very hard to not hold on to until I need it. But that was a funny moment where he seemed very upset that I didn't know stuff. And I was like, "Well, welcome to web development. There is too much to know. You're going to have to have a really good catalog system."

CHRIS: Also, just so we're clear, it's going to change by next Thursday, so don't hang on to anything like it's just true forever.

STEPH: [laughs]

CHRIS: SQL will probably be around. That's about it. That's the one thing that I'm really confident in.

STEPH: Yeah, that feels fair. Get really good at understanding HTTP forms, SQL, all that feels like some really good groundwork.

CHRIS: There are some foundations. We should have a foundations episode where we talk about what we think the foundations are, the stuff that we bet won't be different in 10 years. But everything else is going to change by next Thursday, specifically.

STEPH: Yeah, I like the idea of foundations. I'd be intrigued to see what we talk about and what happens there because I feel like that's going to be very representative of already what we talk about. We often will sprinkle in some new-new, especially thanks to a lot of the adventures that you go on. But I feel like a lot of the stuff that we talk about we always bring it back to the foundation because we do want the experiences that we're having to be applicable to everyone else as well. So yeah, that would be interesting to see what comes out of that.

On that note, shall we wrap up?

CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.

STEPH: This show is produced and edited by Mandy Moore.

CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.

STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.

CHRIS: And I'm @christoomey.

STEPH: Or you can reach us at hosts@bikeshed.fm via email.

CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.

ALL: Byeeeeeeeee!!!

ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Support The Bike Shed