  1. We're working on moving from Ignite-driven patching to the new Patch Manager in LT11, and I'm trying to figure out what determines when patches actually start installing once the patch window begins. We have a few Windows VMs (Server 2k8R2, 2k12R2, 2k16, and Windows 10) set up, and while our Server 2012 R2 VM began patching immediately (evidenced by TiWorker.exe sitting at the top of the process list sorted by CPU utilization), the others started sometime after I left the office for the day, and as I've been monitoring them today, I haven't observed the process start yet. Should I be tighte…
  2. Well, that's done. New patching looks...wow. Very new-school. Now that we're on new patching, is there a recommended way to do this? I've re-checked the Report Center reports and the Patch Compliance report looks WAAAYYY more accurate, so that's good, but I definitely want to know of any better ways to get the data if they exist. If there are none then I'll focus in on creating the best filterset for the Patch Compliance report. Thanks for the help!
  3. I've been tasked with generating a custom alert board that we can reference for a large-format display in our NOC. One of the things we're dealing with is a large number of "No checkin for (x) days" alerts and we want to exclude those from the list, along with some other alert types that are deemed less urgent by our account team. Is it possible to create a custom alert view and use filters to exclude certain alert types, monitors, or strings? We would still want to keep the master list of alerts that shows everything, but we want to have one (or more) that are custom-tailored to sho…
  4. Is there any way to filter the missing patches dataview so that an agent will only show up once in the list if it has any missing patches at all, rather than multiple times (once for each missing patch)?
  5. Management is looking for a report, to be run after Patch Tuesday each month, showing which endpoints are behind/missing/noncompliant on Windows updates. I've looked in Patch Manager but it seems that it only allows me to view by patch rather than by computer, and the report I have in Report Center just shows "All devices are compliant" for each client when I know that's not the case. We're on 11.0.387 (using the legacy UI) and haven't upgraded to new patching yet. I've tried importing this report into Report Designer, but running it (even unfiltered) still shows "All devices are compliant".
  6. I've noticed lately that a lot of the time when I run commands against Mac OS X endpoints, the shell returns "OK" rather than the expected result of the command. For example, running "ls /Applications" just returns "OK". This happens when running commands interactively as well as when the commands are executed by a script - I enabled script engine logging (and even explicitly logged %shellresult% to the console) and the same thing happens: a long list of comments on what the command is doing, with "OK"s between them. Does anyone know why this is happening and/or how to fix it? Some Mac endpoints respon…
  7. Thanks! I logged in to the SC portal and have been comparing hosts there with hosts in Labtech, redeploying and/or updating a spreadsheet (woo) as I go.
  8. Does anyone know of a way to test whether a (specifically Mac OS X) endpoint is accessible via ScreenConnect without actually launching a session and potentially incurring a confused/irate call from a client about why we were randomly taking over their system? I'm fighting with an issue where multiple Mac OS endpoints show ScreenConnect as installed but will only display a black screen (or the "your partner has not connected yet" error) when our techs try to remote in. I've attacked this in a couple of ways, including a Labtech search for installed software and a script that queries runn…
  9. EDIT: Looked into the script a bit and it looks like the conflict-detection logic occurs prior to checking for the @ForceEsetInstall@ flag. So it wouldn't have happened anyway. Also, I tested on a different host and the script log did indeed show the "Attempting FORCED installation" status message, so it looks like this feature is working as advertised. OK, took some screenshots of the scripts I'm running and the params I'm trying to pass. Am I handling these the right way, by using "Variable Set" and entering the variable name without the @signs@ in the Variable Name field? O…
  10. Will try that tomorrow. Does this apply to parameters defined in the Parameters box (top right corner of script editor) as well? The ForceEsetInstall param is one that appears in the script-run dialog when executing the script on an agent.
  11. I've been putting together an automated-deployment script for new client workstations and am running into a minor snag - we have another script that installs our managed ESET package which I'm calling via the "Script Run" function in my master script. However, the ESET script has conflict-detection built in and is falsely alerting on Windows Defender as a competing/conflicting AV package. The ESET script has a parameter @ForceEsetInstall@ to disable conflict detection but I can't figure out how to set it from inside my master script. I've tried preemptively setting @ForceEsetInstall@ =…
  12. I'm working on getting some out-of-date Mac agents updated and I'm trying to wrap my head around how the installer works. I can extract the location-specific .zip downloaded from our Labtech server and get the .mpkg and config.sh files, but I'm not sure how or where the config.sh script gets called. I see that it contains our server address and password and the client location ID, but when I extract the .mpkg I can't find any reference to the config.sh script. Best I can tell it doesn't actually get called - it just sits in the directory alongside the .mpkg. Makes no sense, though, as it H…
  13. I'm working on a script to deploy some TTF font files to a client but am running up against a wall at the last stage. My script successfully downloads a 7zip archive containing the TTF files and the FontReg.exe utility to install them, along with the 7z binary and library, and decompresses the archive. However, when the time comes to run "C:\LTTemp\FontReg.exe /copy" to deploy the fonts, the script either hangs until a reboot or reports success with no actual results. Running FontReg.exe /copy from the shell manually on the endpoint brings up a UAC prompt. I'm not sure which she…
  14. We have one working example that looks for a 301 Moved Permanently and tests successfully. That monitor is configured for just "domain.com" rather than "http(s)://www.domain.com", and a browser will automatically redirect to the latter when pointed at the former. When we test this method against our 3 SSL sites, one reports back "302 Found" and two still come back with "400 Bad Request".
  15. I'm working on updating some site monitors and have run into a snag with a few sites using SSL. Tests keep returning "400 Bad Request" and we have an internal document noting that a former manager needed to get involved for configuring SSL monitors. Problem is that former manager is, well, former and I don't know of anyone else in-house who's done it. Anybody here know a fix for getting a site monitor (TCP Network Check) to successfully test an SSL site for responsiveness?
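A note on the dataview question in item 4 above: the missing-patches data is one row per (agent, patch) pair, so showing an agent only once means grouping and counting. LabTech dataviews are backed by SQL, so the real fix is presumably a GROUP BY in a custom dataview or query, but the transformation itself can be sketched in Python over exported rows. The column layout and names here are made up for illustration:

```python
from collections import defaultdict

def collapse_missing_patches(rows):
    """Collapse (computer, patch) rows into one summary row per computer.

    `rows` is assumed to be an export of the missing-patches dataview,
    where each computer appears once per missing patch.
    """
    counts = defaultdict(int)
    for computer, _patch in rows:
        counts[computer] += 1
    # One row per agent: (computer_name, missing_patch_count)
    return sorted(counts.items())

rows = [
    ("WS01", "KB4012212"),
    ("WS01", "KB4012213"),
    ("SRV02", "KB4012212"),
]
print(collapse_missing_patches(rows))
# → [('SRV02', 1), ('WS01', 2)]
```

The SQL equivalent would be a `GROUP BY` on the computer column with `COUNT(*)`, which is what a custom dataview or Report Center filter would need to express.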
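On the Mac "OK" issue in item 6: a common workaround when a script engine swallows command output is to redirect stdout to a file yourself and read the file back, so the result doesn't depend on what the engine captures. A minimal sketch of that pattern, in Python for illustration (the same idea works as a plain `command > /tmp/out.txt` in a shell step followed by a file read):

```python
import os
import subprocess
import tempfile

def run_and_capture(cmd):
    """Run a shell command, redirecting its stdout to a temp file,
    then read the file back. A workaround for script engines that
    report only "OK" instead of the command's actual output."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        subprocess.run(cmd + " > " + path, shell=True, check=True)
        with open(path) as f:
            return f.read()
    finally:
        os.remove(path)

print(run_and_capture("echo hello").strip())
# → hello
```

This doesn't explain why %shellresult% comes back as "OK", but it gives the script a reliable place to read the real output from in the meantime.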
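On item 8: short of actually joining a session, one low-impact signal is a plain TCP reachability probe. This is a generic sketch, not anything ScreenConnect-specific — which host/port is worth probing depends on how your deployment is exposed, and a successful connect only proves something is listening, not that the client can present a desktop rather than a black screen:

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds. Proves only that something is accepting
    connections there, not that a remote-control session will work."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Pairing this with a LabTech script step that checks for the running client process on the Mac (something along the lines of `pgrep -if screenconnect` — the exact process name varies by instance, so treat that as an assumption) narrows the question, but a host where the process runs and the screen still comes up black would pass both checks.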
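On the SSL-monitor thread (items 14-15): a bare TCP check against port 443 sends neither TLS/SNI nor a valid Host header, and "400 Bad Request" is exactly what many servers answer to that, while 301/302 just means the site redirects to its canonical URL. If the thing you actually want to assert is "the site answers an HTTPS GET with an acceptable status," that test looks roughly like the sketch below. This is an illustration of the check, not how LabTech's TCP Network Check is implemented:

```python
import ssl
from http.client import HTTPConnection, HTTPSConnection

def http_status(host, use_tls=True, path="/", timeout=5.0):
    """Return the HTTP status code a site answers with for GET <path>.

    Sends a proper Host header (and SNI when using TLS), which is
    usually what separates a browser-like request from a bare TCP
    probe that gets answered with 400 Bad Request.
    """
    if use_tls:
        conn = HTTPSConnection(host, timeout=timeout,
                               context=ssl.create_default_context())
    else:
        conn = HTTPConnection(host, timeout=timeout)
    try:
        conn.request("GET", path, headers={"Host": host})
        return conn.getresponse().status
    finally:
        conn.close()
```

For the 301/302 sites, the monitor would then treat any of 200/301/302 as healthy (or follow the redirect explicitly) rather than pinning success to a single status code.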