Posts by apullin

    Do the Segger software tools support IPv6 addresses for connecting to remote hosts? Are there address format constraints? e.g. leading zeros, double colons, etc.

    I can't see any mention of IPv6 in the documentation, and it is not currently working for me, but I am not entirely sure whether it is a Segger tool issue or a separate network architecture issue.

    Is it possible to use the plugin or JS scripting engine to send data into the "Timeline" window?

    The application is for real-time plotting of a signal.
    In this application, the values of the signal have to go through a hardware FIFO, so they only exist in bursts in an array.
    Although, in this case, each sample *is* timestamped.

    So, if I could hijack the drawing in Timeline and trigger updates on specific PC matches (or some other trigger mechanism), that would work great.
    It would just be a little bit of code to draw the `(t,y)` points, rather than trying to observe `y` as it rapidly changes within the burst.
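
    If there is no supported way to do that from the scripting engine, a fallback would be to just stream the timestamped samples out over a dedicated RTT up-buffer and draw the `(t,y)` points host-side. A rough sketch of the firmware half; the channel number, struct layout, and buffer size are all placeholder assumptions:

    ```
    #include <stdint.h>
    #include "SEGGER_RTT.h"

    /* Hypothetical sample format: one timestamped point per FIFO entry. */
    typedef struct {
        uint32_t t;   /* timestamp attached to the sample            */
        int32_t  y;   /* signal value drained from the hardware FIFO */
    } Sample;

    static uint8_t _acPlotBuf[1024];   /* backing storage for the up-buffer */

    void PlotInit(void)
    {
        /* Channel 1 is an arbitrary choice; channel 0 stays free for the terminal. */
        SEGGER_RTT_ConfigUpBuffer(1, "PLOT", _acPlotBuf, sizeof(_acPlotBuf),
                                  SEGGER_RTT_MODE_NO_BLOCK_SKIP);
    }

    void PlotBurst(const Sample* pSamples, unsigned NumSamples)
    {
        /* Push the whole burst; a host-side reader can then plot the (t, y) pairs. */
        SEGGER_RTT_Write(1, pSamples, NumSamples * sizeof(Sample));
    }
    ```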

    Can RTT be made to work on an ARM1176 target? Yes, I know it is now ancient silicon. After some tinkering, there seem to be some suggestions that it might be impossible, due to the lack of "background memory access"?

    The control block is definitely linked and sitting at 0x00012940, but there's no RTT output on the terminal.

    And the Ozone console surfaces that BMA error:
    ```
    1:04:15.579 090 Edit.SysVar (VAR_ALLOW_BMA_EMULATION, 1);
    1:04:17.144 014 System settings were written to the project file.
    1:04:22.302 502 Project.SetRTT (1);
    1:04:22.302 598 RTT has been activated, however the hardware does not support background memory access or BMA emulation is not permitted (see system variable VAR_ALLOW_BMA_EMULATION)
    1:04:22.302 611 RTT active
    1:04:22.339 392 Error (78): Failed to activate RTT .
    1:04:22.339 421 RTT inactive
    ```

    @Haddock Also look into RTT. It is a fairly good tech, and it gives you a data in/out pipe. I have used that before to set up a system where a running device listens for "commands" over the input stream, does certain manufacturing/commissioning-time actions, and then locks out any more commands.
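
    Very roughly, the device side of that scheme can be as small as a poll loop on the RTT down-buffer, something like this (the command names, framing, and lockout behavior here are placeholders for illustration, not what I actually shipped):

    ```
    #include <string.h>
    #include "SEGGER_RTT.h"

    /* Poll the RTT down-buffer for single-line "commands". */
    void CommandPoll(void)
    {
        static int locked = 0;      /* set once commissioning is complete */
        char cmd[64];
        unsigned n;

        if (locked || SEGGER_RTT_HasData(0) == 0) {
            return;
        }
        n = SEGGER_RTT_Read(0, cmd, sizeof(cmd) - 1);
        cmd[n] = '\0';

        if (strncmp(cmd, "SETSN ", 6) == 0) {
            /* ... write serial number to NVM ... */
            SEGGER_RTT_WriteString(0, "OK\r\n");
        } else if (strncmp(cmd, "LOCK", 4) == 0) {
            locked = 1;             /* ignore everything from here on */
            SEGGER_RTT_WriteString(0, "LOCKED\r\n");
        }
    }
    ```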

    Also look into pylink-square. They wrapped the Segger DLL in a Python library, making it significantly easier to do a lot of these manipulations.

    afaik the way the pros do the "production date" thing is that they either modify the ELF file with an ELF library, or use srecord to jam a binary into an existing hex file.
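
    e.g. with srecord, jamming a small blob (serial number, build date, whatever) into an existing hex image at a fixed address is a one-liner along these lines; the file names and target address are placeholders:

    ```
    srec_cat app.hex -Intel serial.bin -Binary -offset 0x0807F800 -o app_with_serial.hex -Intel
    ```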


    It's a little ... unfortunate that Segger does not support some of these tech stacks a little more.
    There is a pretty clear need for a generalized RPC facility. All the tooling is in place for it. If I were less lazy, I'd maybe write it.
    It is shocking how often a whole on-device CLI needs to be put into place to facilitate this need.
    Then everything from "set flashing datetime" to "inject serial number" and such could be done with on-device functions, committed & unit tested code, instead of raw memory edits, compile-time tricks, etc.

    This may be an STM32-specific thing, and not related to Segger BUT:

    I enabled read protection on an STM32L496AG part.

    When I connect with:

    Code
    JLinkExe -device STM32L496ag -if swd -speed auto


    And then do "halt",
    sure enough, I am presented with the dialog about "Active read protected STM32 device detected."

    But if I select "no", and DON'T trigger a mass erase, it drops to the "J-Link>" prompt,
    and I can then do:

    Code
    mem32 0x08000000,32

    and it reads out the flash memory. SP, vector table, etc.

    Which is ... surprising? ?(

    I need to come up with a solution for setting an external RTC on a board w/ an STM32L4-line MCU.

    It occurs to me that I could potentially do this via an OpenFlashLoader tool, where I could create a few memory-mapped variables, e.g. 32b epoch time to set, 32b sentinel to start setting, 32b response value to pend on.
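
    The memory-mapped part I am picturing is nothing more exotic than a small struct placed at a fixed address that the host and the loader both agree on; the names, layout, and magic value below are placeholders:

    ```
    #include <stdint.h>

    /* Hypothetical mailbox for an RTC-setting "flash loader": the host writes
     * epoch + sentinel, the loader-side code sets the external RTC over the
     * board's bus and then writes a result code for the host to pend on. */
    typedef struct {
        volatile uint32_t epoch;     /* UNIX time to program into the RTC        */
        volatile uint32_t sentinel;  /* host writes e.g. 0x534554A5 to start     */
        volatile uint32_t response;  /* loader writes 0 on success, else an error */
    } RtcMailbox;

    /* Placed at a fixed, agreed-upon RAM address via the linker script. */
    __attribute__((section(".rtc_mailbox"))) RtcMailbox gRtcMailbox;
    ```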

    Can multiple separate OpenFlashLoader ELF's be used for different address spaces on a chip?
    Or would I have to extend the "main" one that ships for use with QSPI flash?

    And if I have to extend the central one, is the source for that available? I am using an STM32L496AG, so I am not even sure which OFL ELF is being used under the hood. (but QSPI loading works!)

    Yes, I could probably do the same thing with an all-RAM standalone program, but then I'd have to bring up a separate project, build system, etc.
    But having a general chassis for memory-mapped access to board comms busses via JLink+OFL seems like a pretty useful tool to have.

    As an aside: is there a known solution for setting the on-chip RTC via JLink only?
    This would nominally solve my need, as long as I refactor everything downstream to propagate the known-good sense of time in any/all firmware loaded onto the board.

    I am also consistently having problems with recent versions of Ozone on OSX.
    I just tried 3.24b, and these issues persisted. I believe they showed up all through the 3.23 series, so I have been using 3.22a.

    Behaviors:
    - Dragging the "FreeRTOS" window from being docked to undocked will usually result in an app hang or immediate app crash
    - Dragging other windows around sometimes does not cause a crash, but resizes the app window and results in large blank gray spaces in the layout
    - Font looks odd; might be an intentional design change, rather than a bug

    The top of the stack trace looks like this:

    Hi. I am using JLink on OSX, targeting an STM32L496AG micro.

    When connecting via the command-line JLinkExe, it seems like the RTT control block is not automatically found.

    Invocation is:
    JLinkExe -device STM32L496AG -if swd -speed auto -autoconnect 1

    While JLinkExe connects to and controls the core without issue, it does not seem to make RTT output available to the JLinkRTTClient running in another shell.
    Is this expected behavior?

    However, when using Ozone for debugging, the RTT block is detected automatically upon connection, and all output appears through JLinkRTTClient.

    After some tinkering, the one workaround I found was to force a manual search, by putting this into an `rtt.jlink` file:
    hrexec SetRTTSearchRanges 0x20000000 0x50000

    And running:
    JLinkExe -device STM32L496AG -if swd -speed auto -autoconnect 1 -CommandFile rtt.jlink

    In this case, it does appear that RTT output is picked up and visible in the JLinkRTTClient.

    A minor note: the RAM size listed for the STM32L496AG in Segger's device list is 256KB, whereas this part actually has 320KB of SRAM. Possibly an incorrect search range is being used? Although I am not sure why that would work in Ozone and not in the CLI.
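
    For what it's worth, the other way I could probably sidestep the search entirely is to pin the control block to a known location; SEGGER_RTT.c supports a SEGGER_RTT_SECTION define for this, with the section name and linker placement being up to the project:

    ```
    /* In SEGGER_RTT_Conf.h (or via a compiler define): put the RTT control block
       into its own section so it lands at a fixed, known RAM address. */
    #define SEGGER_RTT_SECTION  ".rtt_control_block"

    /* The linker script then places .rtt_control_block somewhere predictable,
       e.g. at the very start of SRAM1 (0x20000000 on this part). */
    ```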

    Is there a patch to add SystemView available for the FreeRTOS V10.0.1 kernel?

    There are a few places that are different, and different enough that it is not immediately obvious how to manually patch in the changes.

    e.g. the changes in `prvAddNewTaskToReadyList`, which appears to have changed significantly between V10.0.0 and V10.0.1.

    Is there any existing tool that could be applied to a hardware-in-the-loop QA system to capture information like the stack trace heading into the hardfault? And ideally, the equivalent of the full output of the FreeRTOS view from Ozone. Beyond that, maybe even the stack unwinding for each of the tasks, too.

    The application here is that we have a QA agent doing very labor intensive testing, and in the case of hardfaults, it would be great to grab all that information at the time of failure, where presently we only have the serial terminal log.

    At a glance, it looks like SystemView might cover some of this need, by the ability to continuously log data.
    But will it also emit and log the call stack trace (if possible) at the moment of the hardfault? And the state of all the other tasks?

    The ideal format for this tool would be a daemon that runs in the background, uses an existing JLink connection, and would capture and dump that information to a file based on certain conditions, maybe hitting a specific breakpoint, or checking whether a fault is active at the time a breakpoint is hit.
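
    The firmware-side piece I do know how to do is snapshotting the faulting context from the HardFault handler itself, which a HIL rig or a debugger script could then read out after the fact. This is the standard Cortex-M pattern, nothing Segger-specific, and the names here are placeholders:

    ```
    #include <stdint.h>

    /* Registers automatically stacked by the Cortex-M core on exception entry. */
    typedef struct {
        uint32_t r0, r1, r2, r3, r12, lr, pc, xpsr;
    } ExceptionFrame;

    /* Copy of the last faulting frame, readable by the test rig or debugger. */
    volatile ExceptionFrame g_lastFault;

    void HardFault_HandlerC(ExceptionFrame *frame)
    {
        g_lastFault = *frame;                     /* faulting PC/LR are now easy to dump */
        for (;;) { __asm volatile ("bkpt 0"); }   /* park here for the debugger          */
    }

    /* Figure out which stack was in use and hand the stacked frame to C code. */
    __attribute__((naked)) void HardFault_Handler(void)
    {
        __asm volatile (
            "tst lr, #4          \n"
            "ite eq              \n"
            "mrseq r0, msp       \n"
            "mrsne r0, psp       \n"
            "b HardFault_HandlerC\n"
        );
    }
    ```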

    I am looking into what can be mechanized via gdb ... but I almost never use command-line GDB, since Ozone works well for my normal debugging.
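
    For the gdb route, I imagine a non-interactive invocation against a running J-Link GDB Server would look roughly like this (the ELF name is a placeholder; 2331 is the server's default port):

    ```
    arm-none-eabi-gdb firmware.elf -batch \
        -ex "target remote localhost:2331" \
        -ex "monitor halt" \
        -ex "backtrace" \
        -ex "info threads"
    ```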

    It also occurs to me that it might be possible to do this via Ozone by writing a plugin that uses the Javascript engine, too?

    I am working with a board that is copied partly from the STM32L496G_Disco board as a reference, using that exact micro and SPI flash.

    The good news is that the bundled OFL loader for this MCU + flash combo works "out of the box" with JLink and Ozone. Wow.

    The bad news is that it is a bit slow to program even a modest-size hex; ~1.7MB takes 30+ seconds.

    What speed(s) are set up for the MCU and the QSPI clock in the ST_STM32L496G_Disco_QSPI.elf OFL that is bundled in the JLink package?

    Is the source for that OFL project available, in case I want to modify it & raise the MCU clock to maximize programming speed? Or maybe implement some of the cleverer BlankCheck and Verify functions, if those are not in there already.

    Will the output from the Ozone code profile listing for the "load" metric be valid for functions that are moved to a run-from-RAM section via a gcc attribute?
    e.g. `__attribute__((section(".fast")))`, with the `.fast` section located in the RAM segment by the linker script.

    I don't have a minimum working example to reproduce at the moment, I am just observing this in the context of a large/complex program where I am trying to accelerate some crypto functions.
    When I move some of the functions that were showing a high "Load" percentage in the code profile into RAM, I am still seeing the "Run count" increase, but the "Load" goes from ~7% to 0.01%.

    Since that is a surprisingly large difference for a relatively simple function (128B, mbedtls_mpi_cmp_mpi from the mbed TLS library), I have some concern that the code profiler is getting confused. But I do see mbedtls_mpi_cmp_mpi and _mbedtls_mpi_cmp_mpi_veneer both detected, both listed with accurate addresses in RAM & flash respectively, and the run counts increasing together.
    So I am unsure ...

    Alright, for a little more exposition, the error can be captured in a simple printf of the address before and after the function call.
    e.g.:

    C
    printf("addr = %p\n", &myvar);

    Looking at the disassembly, this compiles to:

    Code
    0801 8A78  ADD.W        R3, R7, #0x10
    0801 8A7C  MOV          R1, R3
    0801 8A7E  LDR          R0, =0x080B0070   ;[0x08018BBC]
    0801 8A80  BL           printf            ; 0x080A4640

    Where 0x080B0070 is the address of the const string.

    So it looks like R7+0x10 = 0x2001C090 is storing the address of the variable.

    In the suspect function, R7 gets stacked, but then an incorrect value is unstacked, ergo the suspect function must be clobbering my stack frame (or the RTOS is corrupting something).
    Upon function return, R7 is reloaded with the corrupted value, so R7+0x10 now points to the wrong place.

    I guess this comes down to how the debugger has to make a fake "symbol" or expression to represent the local stack variable, since it is not a proper object symbol after compilation. So it is really only an inferred location, computed relative to the frame base (R7 here), which would explain why the reported address moves when R7 gets trashed.

    (Also ... couldn't the J-Link itself instrument C function entry and check for stack corruption on return? That would be pretty great ... )
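
    In the meantime, the compiler can approximate that check itself: building the suspect code with -fstack-protector-strong makes GCC insert a canary that is verified on function return, and on bare metal you only have to supply the two hooks below (the guard value and fail behavior are placeholders):

    ```
    #include <stdint.h>

    /* Bare-metal support for GCC's -fstack-protector: the compiler checks this
       guard before returning and calls the fail hook if the frame was smashed. */
    uintptr_t __stack_chk_guard = 0xDEADBEEF;   /* ideally randomized at boot */

    void __stack_chk_fail(void)
    {
        /* Frame corruption detected: park here so the debugger catches it. */
        __asm volatile ("bkpt 0");
        for (;;) { }
    }
    ```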

    I can't provide the ELF, since it would contain our proprietary source. Maybe if we put an NDA in place? But then you would need very specific hardware for the ELF to even run.

    This issue also happens when using gdbserver + eclipse + GNU Arm Eclipse plugin suite; the behavior is reproduced in the Eclipse watch window.

    How is Ozone/Jlink even resolving the variable name to a location? There isn't a symbol in the ELF for it, as far as I can see ...

    I am debugging a project with Ozone, and I am seeing something that I do not understand the cause of.
    It may have a clear cause ... or it may be an Ozone bug. I'm unsure.

    In the code that I am debugging, I am setting a watch on a stack variable (a struct), and Ozone reports the location as 0x2001C090. The size is correctly reported as 121 bytes.
    After returning from the call to a function that is suspected to contain bugs (stepped over rather than stepped into), the location reported by Ozone changes to 0x20000010. The size is still shown as 121B.

    The MSP and PSP appear to not be affected by the suspect function when stepping over it.

    Where is Ozone pulling the location information from? I am not even sure what kind of corruption or overflow could be happening to clobber that, since it should just be pulled from the ELF file ...?

    Interesting.
    Will J-Link support OpenFlashLoader loaders, if they are added to the XML file?

    I remapped the external flash bank to a base of 0xC0000000, which is unused on my micro, in both the FlashDev.c and in the XML file.
    J-Flash works as expected when operating from these addresses, but J-Link commander only returns "Could not read memory" when trying:
    mem 0xC0000000,0x1FF
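
    For reference, the kind of JLinkDevices.xml entry I mean is roughly the following; the attribute names are from my reading of the Open Flashloader documentation, and the loader path and size are placeholders for my custom bank at 0xC0000000:

    ```
    <Database>
      <Device>
        <ChipInfo Vendor="ST" Name="STM32L496AG" />
        <FlashBankInfo Name="External QSPI flash" BaseAddr="0xC0000000" MaxSize="0x00800000"
                       Loader="Devices/ST/Custom_QSPI_OFL.elf" LoaderType="FLASH_ALGO_TYPE_OPEN" />
      </Device>
    </Database>
    ```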